Provably Powerful Graph Neural Networks for Directed Multigraphs
Béni Egressy, Luc von Niederhäusern, et al.
AAAI 2024
One of the primary goals of machine learning is to make statements about the performance of trained models on new data, i.e., to understand a model's generalization power. Various capacity measures try to capture this ability, but they usually fall short in explaining important characteristics of models observed in practice. In this study, we propose the local effective dimension as a capacity measure that appears to correlate well with generalization error on standard data sets. Importantly, we prove that the local effective dimension bounds the generalization error, and we discuss the aptness of this capacity measure for machine learning models.
Divyansh Jhunjhunwala, Neharika Jali, et al.
ISIT 2024
Paulo Rodrigo Cavalin, Pedro Henrique Leite Da Silva Pires Domingues, et al.
ACL 2023
Hiroki Yanagisawa, Kohei Miyaguchi, et al.
NeurIPS 2022