Changnian Han, Peng Zhang, et al.
Journal of Computational Physics
Training data and models are becoming increasingly large in many deep-learning applications, and large-scale distributed processing is employed to accelerate training. Increasing the number of learners in synchronous and asynchronous stochastic gradient descent, however, presents challenges for both convergence and communication performance. We present a hierarchical, bulk-synchronous stochastic gradient descent algorithm that effectively balances execution time and accuracy when training deep-learning applications on GPU clusters. At scale it achieves markedly better convergence and execution time than asynchronous stochastic gradient descent implementations. Deployed on a cluster of 128 GPUs, our implementation achieves speedups of up to 56 times over sequential stochastic gradient descent while reaching similar test accuracy on our target application.
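The abstract describes a hierarchical, bulk-synchronous scheme: workers synchronize gradients frequently within a group and models are averaged across groups less often. The following is a minimal single-process sketch of that general idea, not the paper's implementation; the group sizes, the `global_sync_period` parameter, and the toy least-squares problem are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): hierarchical, bulk-synchronous
# SGD simulated in one process with NumPy. Workers are grouped (e.g. GPUs on
# one node); gradients are averaged synchronously inside each group at every
# step, and the group models are averaged across groups only periodically.
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize ||X w - y||^2 over w.
n_samples, n_features = 4096, 32
X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
y = X @ w_true + 0.01 * rng.normal(size=n_samples)

n_groups, workers_per_group = 4, 4   # assumed hierarchy: 4 "nodes" x 4 "GPUs"
batch_size, lr = 64, 0.01
global_sync_period = 8               # inter-group model averaging every 8 steps

# One model replica per group; workers inside a group proceed in lock-step.
group_models = [np.zeros(n_features) for _ in range(n_groups)]

for step in range(200):
    for g in range(n_groups):
        # Bulk-synchronous step inside the group: each worker computes a
        # gradient on its own mini-batch, then the gradients are averaged.
        grads = []
        for _ in range(workers_per_group):
            idx = rng.integers(0, n_samples, size=batch_size)
            Xb, yb = X[idx], y[idx]
            grads.append(2.0 * Xb.T @ (Xb @ group_models[g] - yb) / batch_size)
        group_models[g] -= lr * np.mean(grads, axis=0)

    # Less frequent synchronization across groups: average the group models.
    if (step + 1) % global_sync_period == 0:
        avg = np.mean(group_models, axis=0)
        group_models = [avg.copy() for _ in range(n_groups)]

final_w = np.mean(group_models, axis=0)
print("parameter error:", np.linalg.norm(final_w - w_true))
```

The period of the cross-group averaging controls the trade-off the abstract alludes to: shorter periods behave like fully synchronous SGD (better convergence, more communication), while longer periods reduce communication at some cost in accuracy.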
Guojing Cong, David A. Bader
Journal of Parallel and Distributed Computing
Onkar Bhardwaj, Guojing Cong
MLHPC 2016
David A. Bader, Guojing Cong
Journal of Parallel and Distributed Computing