Abstract
Many machine learning problems are naturally defined over graph data, and much recent research has focused on defining neural networks for graphs. The core idea is to learn a hidden representation for the graph vertices via a convolutional or recurrent mechanism. When considering discriminative tasks on graphs, such as classification or regression, one critical component to design is the readout function, i.e., the mapping from the set of vertex representations to a fixed-size vector (or to the output). Several approaches have been presented in the literature, but recent ones tend to be complex, making the whole network harder to train. In this paper, we frame the problem in the setting of learning over sets. Building on recently proposed theorems about functions defined on sets, we propose a simple but powerful formulation for a readout layer that can represent, or approximate arbitrarily well, any continuous permutation-invariant function over sets. Experimental results on real-world graph datasets show that, compared to other approaches, the proposed readout architecture can improve the predictive performance of Graph Neural Networks while being computationally more efficient.
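For intuition, the kind of readout the abstract alludes to follows the sum-decomposition form ρ(Σᵢ φ(hᵢ)) from the Deep Sets theorem: a per-vertex encoder φ, an order-independent sum over vertices, and a decoder ρ. The sketch below is an illustrative assumption, not the authors' exact architecture; the class name `SetReadout`, the layer sizes, and the choice of MLPs for φ and ρ are all placeholders.

```python
import torch
import torch.nn as nn

class SetReadout(nn.Module):
    """Permutation-invariant readout: rho(sum_i phi(h_i)).

    By the Deep Sets result, any continuous permutation-invariant
    function over a set of vectors can be approximated in this form
    when phi and rho are universal approximators (e.g. MLPs).
    Layer sizes here are illustrative, not taken from the paper.
    """
    def __init__(self, node_dim, hidden_dim, out_dim):
        super().__init__()
        self.phi = nn.Sequential(            # per-vertex encoder
            nn.Linear(node_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.rho = nn.Sequential(            # decoder applied after pooling
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, h):                    # h: (num_vertices, node_dim)
        pooled = self.phi(h).sum(dim=0)      # sum pooling -> order-invariant
        return self.rho(pooled)              # fixed-size graph representation

# Toy usage: a 5-vertex graph with 16-dimensional vertex embeddings.
readout = SetReadout(node_dim=16, hidden_dim=32, out_dim=8)
h = torch.randn(5, 16)
graph_vec = readout(h)                       # shape: (8,)
```

Because the sum over vertices is order-independent, the output is identical under any permutation of the rows of `h`; universality then follows when φ and ρ are sufficiently expressive function approximators.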
| Original language | English |
| --- | --- |
| Title of host publication | 2019 International Joint Conference on Neural Networks, IJCNN 2019 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| ISBN (Electronic) | 9781728119854 |
| DOIs | |
| State | Published - Jul 2019 |
| Externally published | Yes |
Publication series
| Name | Proceedings of the International Joint Conference on Neural Networks |
| --- | --- |
| Volume | 2019-July |
Bibliographical note
Publisher Copyright: © 2019 IEEE.
Keywords
- learning representations for structured data
- machine learning for structured data
- neural networks for graphs
ASJC Scopus subject areas
- Software
- Artificial Intelligence