LESI-GNN: An Interpretable Graph Neural Network Based on Local Structures Embedding

Giorgia Minello; Alessandro Bicciato; Andrea Torsello; Luca Cosmo
2025-01-01

Abstract

In recent years, deep learning researchers have been increasingly interested in developing architectures able to operate on data abstracted as graphs, i.e., Graph Neural Networks (GNNs). At the same time, there has been a surge in the number of commercial AI systems deployed for real-world applications. At their core, the majority of these systems are based on black-box deep learning models, such as GNNs, greatly limiting their accountability and trustworthiness. The idea underpinning this paper is to exploit the representational power of graph variational autoencoders to learn an embedding space where a “convolution” between local structures and latent vectors can take place. The key intuition is that this embedding space can then be used to decode the learned latent vectors into more interpretable latent structures. Our experiments validate the performance of our model against widely used alternatives on standard graph benchmarks, while also showing the ability to probe the model's decisions by visualising the learned structural patterns.
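To give a rough sense of the idea described in the abstract, the following is a minimal, illustrative sketch (not the authors' implementation): node latent vectors, as a graph variational autoencoder's encoder might produce, are compared against a small set of latent structure prototypes via inner products, a simple stand-in for the "convolution" in latent space. All names, dimensions, and the softmax normalisation are assumptions for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper): N nodes,
# D-dimensional latent space, K learnable structural prototypes.
N, D, K = 6, 8, 3

# Stand-in for the encoder half of a graph variational autoencoder:
# each node's local structure would be mapped to a latent vector z_i.
Z = rng.normal(size=(N, D))   # node latent vectors
P = rng.normal(size=(K, D))   # latent structure prototypes

# A "convolution" in latent space: each node's response to each
# prototype is their inner product; a softmax turns the responses
# into soft assignments to structural patterns.
scores = Z @ P.T                                            # (N, K)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)

# Interpretability hook: a decoder would map each prototype back to an
# explicit local structure; here we only report the dominant pattern.
dominant = weights.argmax(axis=1)
print(dominant.shape)  # one structural-pattern index per node
```

In this reading, interpretability comes from the prototypes living in the same decodable latent space as the node embeddings, so each learned pattern can be visualised as a graph structure rather than an opaque weight vector.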
Lecture Notes in Computer Science
Files for this item:
There are no files associated with this item.

Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/10278/5090329