Fast Training of Deep Neural Networks for Speech Recognition
Guojing Cong, Brian Kingsbury, et al.
ICASSP 2020
We propose algorithms and techniques to accelerate training of deep neural networks for action recognition on a cluster of GPUs. The convergence analysis of our algorithm shows that it is possible to reduce communication cost and at the same time minimize the number of iterations needed for convergence. We customize the Adam optimizer for our distributed algorithm to improve efficiency. In addition, we employ transfer learning to further reduce training time while improving validation accuracy. For the UCF101 and HMDB51 datasets, the validation accuracies achieved are 93.1% and 67.9%, respectively. With an additional end-to-end trained temporal stream, the validation accuracies achieved for UCF101 and HMDB51 are 93.47% and 81.24%, respectively. As far as we know, these are the highest accuracies achieved with the two-stream approach using ResNet that does not involve computationally expensive 3D convolutions or pretraining on much larger datasets.
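The abstract does not spell out how the distributed Adam variant trades communication against iterations, but the standard pattern it alludes to is local updates with periodic parameter averaging: each worker runs Adam independently and synchronizes only every `tau` steps, cutting communication by a factor of `tau`. The sketch below is a minimal single-process simulation of that idea on a toy quadratic objective; the function names, the objective, and all hyperparameters are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update with bias correction (Kingma & Ba)."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def local_adam(num_workers=4, tau=5, steps=50, dim=8, seed=0):
    """Hypothetical local-Adam sketch: workers take `tau` independent Adam
    steps between synchronizations, then average parameters (one
    communication round per `tau` steps instead of per step)."""
    rng = np.random.default_rng(seed)
    target = rng.normal(size=dim)  # toy objective: mean ||w - target||^2
    ws = [np.zeros(dim) for _ in range(num_workers)]
    ms = [np.zeros(dim) for _ in range(num_workers)]
    vs = [np.zeros(dim) for _ in range(num_workers)]
    for t in range(1, steps + 1):
        for k in range(num_workers):
            # Noisy gradient of the quadratic, standing in for a minibatch.
            g = 2 * (ws[k] - target) + 0.1 * rng.normal(size=dim)
            ws[k], ms[k], vs[k] = adam_step(ws[k], g, ms[k], vs[k], t)
        if t % tau == 0:  # the only communication: average every tau steps
            avg = np.mean(ws, axis=0)
            ws = [avg.copy() for _ in range(num_workers)]
    final = np.mean(ws, axis=0)
    return float(np.mean((final - target) ** 2))
```

With `tau=1` this reduces to fully synchronous training; larger `tau` lowers communication volume at the cost of some drift between workers, which is the trade-off the paper's convergence analysis addresses.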