Gosia Lazuka, Andreea Simona Anghel, et al.
SC 2024
Graph contrastive learning has made remarkable advances in settings where task-specific labels are scarce. Despite these advances, the significant computational overhead of representation inference incurred by existing methods that rely on intensive message passing makes them unsuitable for latency-constrained applications. In this paper, we present GraphECL, a simple and efficient contrastive learning method for fast inference on graphs. GraphECL does away with the need for expensive message passing during inference. Specifically, it introduces a novel coupling of MLP and GNN models, in which the former learns to efficiently mimic the computations performed by the latter. We provide a theoretical analysis showing why an MLP can capture essential structural information about neighbors well enough to match the performance of a GNN on downstream tasks. Extensive experiments on widely used real-world benchmarks show that GraphECL achieves superior performance and inference efficiency compared to state-of-the-art graph contrastive learning (GCL) methods on both homophilous and heterophilous graphs.
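The abstract describes the MLP-GNN coupling only at a high level. The sketch below illustrates what such a cross-model contrastive objective could look like in PyTorch: the MLP embedding of a node is pulled toward the GNN embeddings of its neighbors (positives) and pushed away from all other nodes (negatives), so that at inference time only the MLP needs to run. All names here (MLPEncoder, coupling_loss, tau) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPEncoder(nn.Module):
    """Structure-free encoder used at inference time (no message passing)."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def coupling_loss(h_mlp, h_gnn, edge_index, tau=0.5):
    """InfoNCE-style objective coupling the two models: a node's MLP
    embedding should match the GNN embeddings of its neighbors (positive
    pairs) while being dissimilar from all other GNN embeddings (negatives).
    This is a hypothetical form of the loss, not the paper's exact one."""
    z_mlp = F.normalize(h_mlp, dim=-1)
    z_gnn = F.normalize(h_gnn, dim=-1)
    src, dst = edge_index                       # (2, E) connected node pairs
    pos = torch.exp((z_mlp[src] * z_gnn[dst]).sum(-1) / tau)
    neg = torch.exp(z_mlp[src] @ z_gnn.t() / tau).sum(-1)
    return -torch.log(pos / neg).mean()

# Toy usage: 5 nodes, 8 input features, 4 directed edges.
x = torch.randn(5, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
mlp = MLPEncoder(8, 16, 16)
h_gnn = torch.randn(5, 16)  # stand-in for embeddings from any GNN encoder
loss = coupling_loss(mlp(x), h_gnn, edge_index)
loss.backward()
```

Under this reading, the GNN is needed only during training to produce the targets the MLP contrasts against; at inference the MLP alone is evaluated, so per-node cost no longer depends on neighborhood size.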
Kevin Gu, Eva Tuecke, et al.
ICML 2024
Natalia Martinez Gil, Dhaval Patel, et al.
UAI 2024
Georgios Kollias, Payel Das, et al.
ICML 2024