
Dual-level Mixup for Graph Few-shot Learning with Fewer Tasks / Liu, Y.; Li, M.; Giunchiglia, F.; Huang, L.; Li, X.; Feng, X.; Guan, R.. - (2025), pp. 2646-2656. ( 34th ACM Web Conference, WWW 2025 Sydney Convention and Exhibition Centre, aus 2025) [10.1145/3696410.3714905].

Dual-level Mixup for Graph Few-shot Learning with Fewer Tasks

Liu, Y.; Giunchiglia, F.
2025-01-01

Abstract

Graph neural networks have emerged as a powerful paradigm for learning from graph-structured data on the web and mining its content. Current leading graph models require a large number of labeled samples for training, which unavoidably leads to overfitting in few-shot scenarios. Recent research has sought to alleviate this issue by jointly leveraging graph learning and meta-learning paradigms. However, these graph meta-learning models assume the availability of numerous meta-training tasks from which to learn transferable meta-knowledge. Such an assumption may not be feasible in the real world due to the difficulty of constructing tasks and the substantial costs involved. Therefore, we propose a SiMple yet effectIve approach for graph few-shot Learning with fEwer tasks, named SMILE. We introduce a dual-level mixup strategy, encompassing both within-task and across-task mixup, to simultaneously enrich the available nodes and tasks in meta-learning. Moreover, we explicitly leverage the prior information provided by node degrees in the graph to encode expressive node representations. Theoretically, we demonstrate that SMILE can enhance the model's generalization ability. Empirically, SMILE consistently outperforms other competitive models by a large margin across all evaluated datasets under both in-domain and cross-domain settings. Our anonymous code can be found here.
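The abstract does not spell out the paper's exact mixup formulation, but standard mixup interpolates pairs of examples with a Beta-distributed coefficient. The sketch below is a minimal illustration of the dual-level idea under that assumption: within-task mixup interpolates two node embeddings inside one task, while across-task mixup interpolates the class-prototype matrices of two meta-training tasks to synthesize a new one. All names, shapes, and the use of NumPy vectors for embeddings are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)


def mixup(a, b, alpha=0.5):
    """Return a convex combination lam*a + (1-lam)*b with lam ~ Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha)
    return lam * a + (1.0 - lam) * b, lam


# Within-task mixup: interpolate two support-node embeddings of the same task
# (hypothetical 3-dimensional embeddings).
emb_1 = np.array([1.0, 0.0, 2.0])
emb_2 = np.array([0.0, 1.0, 0.0])
mixed_node, lam = mixup(emb_1, emb_2)

# Across-task mixup: interpolate the prototype sets of two meta-training tasks
# to synthesize an additional task (hypothetical shape: n_way x embedding_dim).
task_a = rng.standard_normal((3, 4))
task_b = rng.standard_normal((3, 4))
mixed_task, _ = mixup(task_a, task_b)
```

Because both levels reuse the same interpolation, the synthetic nodes and tasks stay on the line segment between real samples, which is what gives mixup its regularizing effect when tasks are scarce.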
2025
WWW 2025 - Proceedings of the ACM Web Conference
1601 Broadway, 10th Floor, NEW YORK, NY, UNITED STATES
Association for Computing Machinery, Inc
9798400712746
Liu, Y.; Li, M.; Giunchiglia, F.; Huang, L.; Li, X.; Feng, X.; Guan, R.
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/464200
Warning: the data shown have not been validated by the university.

Citazioni
  • PMC: ND
  • Scopus: 0
  • Web of Science: 0
  • OpenAlex: ND