A data analyst might worry about generalization if dropping a very small fraction of data points from a study could change its substantive conclusions. Checking for this non-robustness directly poses a combinatorial optimization problem and is intractable even for simple models and moderate data sizes. Recently, various authors have proposed a diverse set of approximations to detect this non-robustness. In the present work, we show that, even in a setting as simple as ordinary least squares (OLS) linear regression, many of these approximations can fail to detect (true) non-robustness in realistic data arrangements. We focus on OLS due to its widespread use and because some of the approximations apply only to OLS. Across our synthetic and real-world data sets, we find that a simple recursive greedy algorithm is the only one that does not fail any of our tests, and that it can be orders of magnitude faster to run than some competitors.
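The abstract mentions the recursive greedy algorithm only in passing, so the following is a minimal sketch of what such a greedy data-dropping check for OLS could look like, not the authors' implementation: at each step it drops the single observation that pushes a chosen coefficient furthest toward a sign flip, up to a small budget of removals. The function names (`ols_coef`, `greedy_drop`), the sign-flip stopping criterion, and the `max_drop_frac` budget are illustrative assumptions.

```python
import numpy as np

def ols_coef(X, y):
    """OLS coefficient estimates for design matrix X and response y."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def greedy_drop(X, y, coef_index=0, max_drop_frac=0.01):
    """Illustrative sketch (not the paper's algorithm): greedily drop, one at
    a time, the observation whose removal pushes the chosen OLS coefficient
    furthest toward a sign flip, up to a budget of max_drop_frac * n points.
    Returns (overturned, dropped_indices)."""
    n = X.shape[0]
    full_sign = np.sign(ols_coef(X, y)[coef_index])
    keep = np.ones(n, dtype=bool)
    budget = int(np.floor(max_drop_frac * n))
    dropped = []
    for _ in range(budget):
        base = ols_coef(X[keep], y[keep])[coef_index]
        best_i, best_val = None, base
        for i in np.flatnonzero(keep):
            trial = keep.copy()
            trial[i] = False
            val = ols_coef(X[trial], y[trial])[coef_index]
            # Keep the drop that moves the coefficient most toward (or past)
            # zero relative to its original sign.
            if full_sign * val < full_sign * best_val:
                best_i, best_val = i, val
        if best_i is None:
            break  # no single drop moves the coefficient any further
        keep[best_i] = False
        dropped.append(best_i)
        if np.sign(best_val) != full_sign:
            return True, dropped  # substantive conclusion overturned
    return False, dropped
```

The appeal of a greedy scheme over the exact combinatorial check is visible in the loop structure: it refits OLS on only n candidate subsets per dropped point rather than examining all subsets of a given size, at the cost of possibly missing the worst-case combination.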