GAT2VEC: Representation learning for attributed graphs / Sheikh, Nasrullah; Kefato, Zekarias; Montresor, Alberto. In: Computing, ISSN 0010-485X, vol. 101, no. 3 (2019), pp. 187-209. DOI: 10.1007/s00607-018-0622-9
GAT2VEC: Representation learning for attributed graphs
Sheikh, Nasrullah; Kefato, Zekarias; Montresor, Alberto
2019-01-01
Abstract
Network Representation Learning (NRL) enables the application of machine learning tasks such as classification, prediction, and recommendation to networks. Apart from their graph structure, networks are often associated with diverse information in the form of attributes. Most NRL methods have focused only on structural information and apply traditional representation learning to attributes separately. When multiple sources of information are available, combining them may be beneficial, as they complement each other in generating accurate contexts; moreover, their combined use may be essential when the individual sources are sparse. Learning methods should thus preserve both the structural and the attribute aspects. In this paper, we investigate how attributes can be modeled and subsequently used along with structural information in learning the representation. We introduce the GAT2VEC framework, which uses structural information to generate structural contexts, attributes to generate attribute contexts, and a shallow neural network model to learn a joint representation from both. We evaluate the proposed method against state-of-the-art baselines on real-world datasets, considering vertex classification (multi-class and multi-label), link prediction, and visualization tasks. The experiments show that GAT2VEC effectively exploits multiple sources of information, learning accurate representations and outperforming the state of the art in the aforementioned tasks. Finally, we perform query tasks on the learned representations; a qualitative analysis of the results confirms the improved performance as well.
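To make the idea of the two context sources concrete, the following is a minimal sketch of how structural and attribute contexts could be generated with truncated random walks. All names here (`random_walks`, the toy graphs, the `keep` filter) are illustrative assumptions, not the paper's actual implementation; the attribute contexts are obtained by walking a bipartite vertex-attribute graph and keeping only the vertex nodes, so that vertices sharing an attribute end up in each other's context.

```python
import random

def random_walks(adj, num_walks, walk_len, keep=None, rng=None):
    """Generate truncated random walks over an adjacency dict.

    If `keep` is given, only nodes in `keep` are emitted; in the
    bipartite attribute graph, attribute nodes act only as bridges
    between vertices that share an attribute.
    """
    rng = rng or random.Random(0)
    walks = []
    for _ in range(num_walks):
        for start in adj:
            if keep is not None and start not in keep:
                continue  # do not start walks from attribute nodes
            node, walk = start, [start]
            for _ in range(walk_len - 1):
                nbrs = adj[node]
                if not nbrs:
                    break
                node = rng.choice(nbrs)
                walk.append(node)
            if keep is not None:
                walk = [n for n in walk if n in keep]
            walks.append(walk)
    return walks

# Toy attributed graph: vertices 0-3 on a path, attributes "a" and "b".
structure = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
# Bipartite vertex-attribute graph: vertex <-> attributes it carries.
bipartite = {0: ["a"], 1: ["a", "b"], 2: ["b"], 3: ["b"],
             "a": [0, 1], "b": [1, 2, 3]}

struct_ctx = random_walks(structure, num_walks=2, walk_len=5)
attr_ctx = random_walks(bipartite, num_walks=2, walk_len=5,
                        keep={0, 1, 2, 3})
# Both context sets would then be fed to a single skip-gram model
# (word2vec-style) to learn one joint embedding per vertex.
```

The walks play the role of "sentences" for a shallow skip-gram learner; feeding both sets to one model is what lets the joint representation preserve structure and attributes at once.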
| File | Description | Type | License | Size | Format |
|---|---|---|---|---|---|
| computing18.pdf (open access) | Manuscript version | Refereed author's manuscript (post-print) | Other type of license | 1.05 MB | Adobe PDF |
| Sheikh2019_Article_Gat2vecRepresentationLearningF.pdf (repository administrators only) | | Publisher's layout | All rights reserved | 1.42 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.