Deep Attention Diffusion Graph Neural Networks for Text Classification / Liu, Yonghao; Guan, Renchu; Giunchiglia, Fausto; Liang, Yanchun; Feng, Xiaoyue. - (2021), pp. 8142-8152. (Paper presented at the EMNLP conference, held online and in Punta Cana, Dominican Republic, 7-11 November 2021) [10.18653/v1/2021.emnlp-main.642].
Deep Attention Diffusion Graph Neural Networks for Text Classification
Giunchiglia, Fausto; Liang, Yanchun
2021-01-01
Abstract
Text classification is a fundamental task with broad applications in natural language processing. Recently, graph neural networks (GNNs) have attracted much attention due to their powerful representation ability. However, most existing GNN-based methods for text classification consider only one-hop neighborhoods and low-frequency information within texts, and therefore cannot fully exploit the rich context of documents. Moreover, these models suffer from over-smoothing when many graph layers are stacked. In this paper, a Deep Attention Diffusion Graph Neural Network (DADGNN) model is proposed to learn text representations, bridging the gap in interaction between a word and its distant neighbors. Experimental results on various standard benchmark datasets demonstrate the superior performance of the proposed approach.
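As a rough illustration of the attention-diffusion idea summarized in the abstract, the NumPy sketch below builds a one-hop attention matrix over a word co-occurrence graph and mixes its matrix powers with geometric weights, so each word can attend to distant neighbors within a single layer instead of stacking many one-hop layers. This is a minimal sketch only: the function name, the dot-product attention, and the coefficient schedule theta_k = alpha * (1 - alpha)^k are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def attention_diffusion(node_feats, adj, num_hops=4, alpha=0.2):
    """Diffuse attention over a word graph so each node attends to multi-hop neighbors.

    node_feats : (N, d) word-node embeddings of one document graph.
    adj        : (N, N) binary adjacency, e.g. sliding-window co-occurrence edges.
    alpha, num_hops and the geometric coefficients below are illustrative choices,
    not the coefficients used in the DADGNN paper.
    """
    n = adj.shape[0]
    adj = adj + np.eye(n)                      # self-loops keep every row non-empty

    # One-hop attention: dot-product scores masked to existing edges, then row softmax.
    scores = node_feats @ node_feats.T
    scores = np.where(adj > 0, scores, -np.inf)
    scores -= scores.max(axis=1, keepdims=True)
    att = np.exp(scores)
    att /= att.sum(axis=1, keepdims=True)

    # Attention diffusion: sum_k theta_k * A^k with theta_k = alpha * (1 - alpha)^k.
    # One layer thus mixes information from neighbors up to `num_hops` away,
    # rather than stacking many one-hop layers (which invites over-smoothing).
    diffused = np.zeros_like(att)
    power = np.eye(n)
    for k in range(num_hops + 1):
        diffused += alpha * (1 - alpha) ** k * power
        power = power @ att
    return diffused @ node_feats


# Toy usage: a 5-word document graph with random embeddings.
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))
adj = (rng.random((5, 5)) > 0.5).astype(float)
adj = np.maximum(adj, adj.T)                   # make the graph undirected
out = attention_diffusion(feats, adj)
print(out.shape)                               # (5, 8)
```

The diffusion step is the part that matters here: because the propagation matrix already aggregates several hops of attention, deep stacks of layers are not needed to reach distant context words.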
| File | Type | License | Access | Size | Format |
|---|---|---|---|---|---|
| 2021.emnlp-main.642.pdf | Publisher's layout version | Creative Commons | Open access | 639.25 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.