Victor Akinwande, Megan Macgregor, et al.
IJCAI 2024
In this paper, we address the problem of capturing graph directionality using transformers. Most existing graph transformers capture distances between graph nodes but do not take edge direction into account. This is a significant limitation, since many graph applications need to exploit sophisticated relationships in graph data, such as time, causality, or generic dependency constraints. We introduce a novel graph transformer architecture that explicitly accounts for the directionality between connected graph nodes. To achieve this, we use dual encodings to represent both potential roles, i.e., source or target, of each pair of vertices linked by a directed edge. These encodings are learned by leveraging the latent adjacency information extracted from a directional attention module, localized with k-hop neighborhood information. Extensive experiments on synthetic and real graph datasets show that our approach yields significant accuracy gains over previous graph transformer (GT) and graph neural network (GNN) approaches, achieving state-of-the-art (SOTA) results on inherently directed graphs.
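Below is a minimal sketch of what such a direction-aware attention layer could look like. The class name DirectionalAttention, the k_hop_distances helper, and the additive scalar-bias parameterization are all hypothetical illustrations, not the paper's implementation; the abstract only states that dual source/target encodings are learned from a directional attention module localized with k-hop neighborhood information.

```python
# Sketch only: names, signatures, and the bias scheme below are assumptions
# made for illustration, not the authors' published code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def k_hop_distances(adj: torch.Tensor, k: int) -> torch.Tensor:
    """Directed hop distance between all node pairs, capped at k.

    adj is an (n, n) binary matrix with adj[i, j] = 1 iff edge i -> j.
    Pairs not reachable within k hops receive the sentinel value k + 1.
    """
    n = adj.size(0)
    dist = torch.full((n, n), k + 1, dtype=torch.long)
    dist.fill_diagonal_(0)
    reach = torch.eye(n, dtype=torch.bool)
    frontier = torch.eye(n)
    for h in range(1, k + 1):
        # Walks of length exactly h; pairs first seen here have distance h.
        frontier = (frontier @ adj).clamp(max=1)
        newly = frontier.bool() & ~reach
        dist[newly] = h
        reach |= newly
    return dist


class DirectionalAttention(nn.Module):
    """One attention head with dual source/target hop-distance biases."""

    def __init__(self, dim: int, k_hops: int = 2):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Dual encodings: one learned bias table per role of node i relative
        # to node j (source of a directed path i -> j, or target of a path
        # j -> i). Index k_hops + 1 is the "unreachable" sentinel.
        self.src_bias = nn.Embedding(k_hops + 2, 1)
        self.tgt_bias = nn.Embedding(k_hops + 2, 1)
        self.k_hops = k_hops
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (n, dim) node features; adj: (n, n) directed adjacency.
        fwd = k_hop_distances(adj, self.k_hops)      # i reaches j: source role
        bwd = k_hop_distances(adj.t(), self.k_hops)  # j reaches i: target role
        scores = (self.q(x) @ self.k(x).t()) * self.scale
        scores = scores + self.src_bias(fwd).squeeze(-1)
        scores = scores + self.tgt_bias(bwd).squeeze(-1)
        # Localize attention to the k-hop neighborhood: mask pairs that are
        # unreachable within k hops in either direction.
        unreachable = (fwd > self.k_hops) & (bwd > self.k_hops)
        scores = scores.masked_fill(unreachable, float("-inf"))
        return F.softmax(scores, dim=-1) @ self.v(x)


# Toy usage: a 4-node directed chain 0 -> 1 -> 2 -> 3.
adj = torch.zeros(4, 4)
adj[0, 1] = adj[1, 2] = adj[2, 3] = 1.0
layer = DirectionalAttention(dim=16, k_hops=2)
out = layer(torch.randn(4, 16), adj)  # (4, 16) direction-aware node features
```

The key point the sketch illustrates is that a pair (i, j) receives two distinct learned encodings, one for each role, so attention between the same two nodes differs depending on edge direction, unlike distance-only encodings, which are symmetric.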