Publication
ICML 2021
Workshop paper
Leveraging Theoretical Tradeoffs in Hyperparameter Selection for Improved Empirical Performance
Abstract
The tradeoffs in the excess risk incurred from data-driven learning of a single model have been studied by decomposing the excess risk into approximation, estimation, and optimization errors. In this paper, we focus on the excess risk incurred in data-driven hyperparameter optimization (HPO) and its interaction with the approximate empirical risk minimization (ERM) necessitated by large data. We present novel bounds on the excess risk in several common HPO scenarios. Based on these results, we propose practical heuristics that improve performance or reduce the computational overhead of data-driven HPO, demonstrating a more than 2× speedup with no loss in predictive performance in our preliminary results.
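The decomposition the abstract refers to is the classical one from the large-scale learning literature (Bottou and Bousquet); as a sketch in assumed notation (not necessarily the paper's own), with f* the Bayes-optimal predictor, f_H the best predictor in the hypothesis class H, f_n the empirical risk minimizer on n samples, and f~_n the approximate ERM solution:

% Sketch of the standard excess-risk decomposition (assumed notation,
% following Bottou & Bousquet rather than this paper):
% f^*            : Bayes-optimal predictor
% f_{\mathcal H} : best predictor in the hypothesis class \mathcal{H}
% f_n            : empirical risk minimizer on n samples
% \tilde{f}_n    : approximate (early-stopped) ERM solution
\begin{align*}
\mathbb{E}\big[R(\tilde{f}_n)\big] - R(f^*)
  &= \underbrace{R(f_{\mathcal{H}}) - R(f^*)}_{\varepsilon_{\mathrm{app}}}
   + \underbrace{\mathbb{E}\big[R(f_n)\big] - R(f_{\mathcal{H}})}_{\varepsilon_{\mathrm{est}}}
   + \underbrace{\mathbb{E}\big[R(\tilde{f}_n)\big] - \mathbb{E}\big[R(f_n)\big]}_{\varepsilon_{\mathrm{opt}}}.
\end{align*}

The abstract's setting adds HPO on top of this: each hyperparameter configuration induces its own decomposition, and approximate ERM (large ε_opt) interacts with how reliably configurations can be compared.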