Weichao Mao, Haoran Qiu, et al.
NeurIPS 2022
We consider the task of training machine learning models with data-dependent constraints. Such constraints often arise as empirical versions of expected-value constraints that enforce fairness or stability goals. We reformulate data-dependent constraints so that they are calibrated: enforcing the reformulated constraints guarantees that their expected-value counterparts are satisfied with a user-prescribed probability. The resulting optimization problem is amenable to standard stochastic optimization algorithms, and we demonstrate the efficacy of our method on a fairness-sensitive classification task where we wish to guarantee the classifier's fairness at test time.
Stephanie Houde, Vignesh Radhakrishna, et al.
NeurIPS 2022
Anthony Praino, Lloyd Treinish, et al.
AGU 2024
Ziyao Wang, Muneeza Azmat, et al.
ICML 2025