Gabriele Picco, Lam Thanh Hoang, et al.
EMNLP 2021
Compositionality is thought to be a key component of language, and various compositional benchmarks have been developed to empirically probe the compositional generalization of existing sequence processing models. These benchmarks often highlight failures of existing models, but it is not clear why the models fail in these ways. In this paper, we seek to theoretically understand the role the compositional structure of the models plays in these failures and how this structure relates to their expressivity and sample complexity. We propose a general neuro-symbolic definition of compositional functions and their compositional complexity. We then show how various existing general- and special-purpose sequence processing models (such as recurrent, convolutional, and attention-based ones) fit this definition and use it to analyze their compositional complexity.
Paulito Palmes, Akihiro Kishimoto, et al.
JuliaCon 2023
Elliot Nelson, Debarun Bhattacharjya, et al.
UAI 2022
Thabang Lebese, Ndivhuwo Makondo, et al.
NeurIPS 2021