Fan Jing Meng, Ying Huang, et al.
ICEBE 2007
This paper describes a set of feedforward neural network learning algorithms based on classical quasi-Newton optimization techniques, which are demonstrated to be up to two orders of magnitude faster than backpropagation. Learning performance is then further improved through initial scaling of the inverse Hessian approximation, which makes the quasi-Newton algorithms invariant to scaling of the objective function. Simulations show that initial scaling improves the learning rate of the quasi-Newton-based algorithms by up to 50%; overall, an improvement of two to three orders of magnitude over backpropagation is achieved. Finally, the best of these learning methods is used to develop a small writer-dependent online handwriting recognizer for the digits 0 through 9. The recognizer labels the training data correctly with an accuracy of 96.66%.
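The initial-scaling idea mentioned in the abstract can be illustrated with a standard BFGS quasi-Newton loop. The sketch below is not the paper's implementation: it is a minimal, generic version assuming Shanno-style scaling, where the identity starting matrix is rescaled to (s'y / y'y)·I before the first update, which removes sensitivity to uniform scaling of the objective. All function and variable names here are illustrative.

```python
import numpy as np

def bfgs_with_initial_scaling(f, grad, x0, iters=200):
    """Generic BFGS with initial scaling of the inverse Hessian
    approximation (a sketch, not the paper's exact algorithm)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)              # inverse Hessian approximation, starts as I
    g = grad(x)
    scaled = False
    for _ in range(iters):
        p = -H @ g             # quasi-Newton search direction
        # simple backtracking line search (Armijo sufficient decrease)
        t = 1.0
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
            if t < 1e-12:
                break
        s = t * p              # step taken
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g          # gradient change along the step
        sy = s @ y
        if sy > 1e-12:         # curvature condition: keep H positive definite
            if not scaled:
                # initial scaling: H <- (s'y / y'y) I before the first
                # BFGS update; this makes the iteration invariant to
                # multiplying the objective by a constant
                H = (sy / (y @ y)) * np.eye(n)
                scaled = True
            rho = 1.0 / sy
            I = np.eye(n)
            # standard BFGS inverse update
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
        if np.linalg.norm(g) < 1e-8:
            break
    return x
```

For a feedforward network, `f` would be the training loss and `grad` its gradient with respect to the flattened weight vector; here any smooth test function (e.g. the Rosenbrock function) exercises the same update.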
Elena Cabrio, Philipp Cimiano, et al.
CLEF 2013
William Hinsberg, Joy Cheng, et al.
SPIE Advanced Lithography 2010
David S. Kung
DAC 1998