Program equivalence and context-free grammars
Barry K. Rosen
SWAT 1972
In this paper, we present a self-generating modular neural network architecture for supervised learning. In this architecture, any kind of feedforward neural network can be employed as a component net. For a given task, a tree-structured modular neural network is generated automatically by a growing algorithm that partitions the input space recursively, avoiding the need for a pre-determined structure. Owing to the divide-and-conquer principle underlying the proposed architecture, the modular neural network can yield both good performance and significantly faster training. The proposed architecture has been applied to several supervised learning tasks and has achieved satisfactory results.
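The abstract does not spell out the growing algorithm, so the following is only a minimal, hypothetical sketch of the underlying idea: recursively partition the input space into a tree whose leaves hold local component models. The names (ModuleNode, max_depth, min_samples) and the use of a least-squares fit as a stand-in for an arbitrary feedforward component net are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class ModuleNode:
    """A tree node that either fits a local model for its region of the
    input space or splits that region and delegates to two child nodes."""

    def __init__(self, depth=0, max_depth=3, min_samples=20):
        self.depth = depth
        self.max_depth = max_depth
        self.min_samples = min_samples
        self.split_dim = None   # input dimension used to partition the region
        self.split_val = None   # threshold defining the partition
        self.children = None    # (left, right) subtrees after a split
        self.weights = None     # parameters of the local component model

    def fit(self, X, y):
        # Fit a local least-squares model (stand-in for a component net).
        Xb = np.hstack([X, np.ones((len(X), 1))])
        self.weights, *_ = np.linalg.lstsq(Xb, y, rcond=None)
        residual = np.mean((Xb @ self.weights - y) ** 2)

        # Grow the tree: split the region when the local fit is still poor
        # and there is enough data and depth budget left.
        if self.depth < self.max_depth and len(X) >= 2 * self.min_samples and residual > 1e-3:
            self.split_dim = int(np.argmax(np.var(X, axis=0)))
            self.split_val = float(np.median(X[:, self.split_dim]))
            mask = X[:, self.split_dim] <= self.split_val
            if mask.sum() >= self.min_samples and (~mask).sum() >= self.min_samples:
                left = ModuleNode(self.depth + 1, self.max_depth, self.min_samples)
                right = ModuleNode(self.depth + 1, self.max_depth, self.min_samples)
                left.fit(X[mask], y[mask])
                right.fit(X[~mask], y[~mask])
                self.children = (left, right)

    def predict(self, X):
        # Route each input to the child responsible for its region.
        if self.children is None:
            Xb = np.hstack([X, np.ones((len(X), 1))])
            return Xb @ self.weights
        mask = X[:, self.split_dim] <= self.split_val
        out = np.empty(len(X))
        out[mask] = self.children[0].predict(X[mask])
        out[~mask] = self.children[1].predict(X[~mask])
        return out

# Usage: approximate a nonlinear target with the grown modular tree.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
root = ModuleNode()
root.fit(X, y)
print("train MSE:", np.mean((root.predict(X) - y) ** 2))
```

In this sketch the split heuristic (highest-variance dimension, median threshold) is an arbitrary choice; the divide-and-conquer benefit comes from each leaf training on only a small subregion of the data.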