Axiom-Aware FunSearch for Non-Constructive Mathematics
Max Esposito, Besart Shyti
NeurIPS 2025
Compositional generalization, a key open challenge in modern machine learning, requires models to predict unknown combinations of known concepts. However, assessing it rigorously remains difficult due to the lack of standardized evaluation protocols and the limitations of current benchmarks, which often favor efficiency over rigor. At the same time, general-purpose vision architectures lack the necessary inductive biases, and existing approaches to endowing them with such biases compromise scalability. As a remedy, this paper introduces: 1) a rigorous evaluation framework that unifies and extends previous approaches while reducing computational requirements from combinatorial to constant; 2) an extensive and modern evaluation of the state of compositional generalization in supervised vision backbones, training more than 5000 models; 3) Attribute Invariant Networks, a class of models establishing a new Pareto frontier in compositional generalization, achieving a 23.43% accuracy improvement over baselines while reducing parameter overhead from 600% to 16% compared to fully disentangled counterparts.