Jehanzeb Mirza, Leonid Karlinsky, et al.
NeurIPS 2023
Dataset distillation generates a small set of information-rich instances from a large dataset, reducing storage requirements, privacy and copyright risks, and the computational cost of downstream modeling; however, most research has focused on the image data modality. We study tabular data distillation, which introduces novel challenges such as inherent feature heterogeneity and the common use of non-differentiable learning models (such as decision tree ensembles and nearest-neighbor predictors). To address these challenges, we present TDColER, a framework for tabular data distillation via column embeddings-based representation learning. To evaluate this framework, we also present TDBench, a tabular data distillation benchmark. In an extensive evaluation on TDBench, comprising 226,890 distilled datasets and 548,880 models trained on them, we demonstrate that TDColER boosts the distilled data quality of off-the-shelf distillation schemes by 0.5-143% across 7 different tabular learning models. All of the code used in the experiments can be found at http://github.com/inwonakng/tdbench.
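The abstract describes a two-stage idea: map heterogeneous tabular columns into a shared numeric embedding space, then distill a small synthetic set in that space so that even non-differentiable learners (tree ensembles, nearest-neighbor predictors) can be trained on the result. The sketch below is only a hypothetical illustration of that pipeline, not the authors' TDColER: it standardizes numeric columns, one-hot encodes categorical ones, and uses per-class k-means centroids as a simple stand-in for a learned distiller. All function names (embed_columns, distill) are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import OneHotEncoder, StandardScaler

def embed_columns(X_num, X_cat):
    """Map heterogeneous columns into one numeric space:
    standardize numeric columns, one-hot encode categorical ones.
    (Hypothetical helper; TDColER's actual embedding is learned.)"""
    Z_num = StandardScaler().fit_transform(X_num)
    Z_cat = OneHotEncoder(sparse_output=False).fit_transform(X_cat)
    return np.hstack([Z_num, Z_cat])

def distill(Z, y, per_class=10, seed=0):
    """Distill the embedded data: for each class, keep k-means
    centroids as synthetic rows (a stand-in for a learned distiller)."""
    Z_small, y_small = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=per_class, n_init=10, random_state=seed)
        km.fit(Z[y == c])
        Z_small.append(km.cluster_centers_)
        y_small.append(np.full(per_class, c))
    return np.vstack(Z_small), np.concatenate(y_small)

# Toy usage: 1,000 rows shrink to 20 synthetic rows, then a
# non-differentiable learner (kNN) is trained on the distilled set.
rng = np.random.default_rng(0)
X_num = rng.normal(size=(1000, 4))
X_cat = rng.choice(["a", "b", "c"], size=(1000, 2))
y = (X_num[:, 0] + X_num[:, 1] > 0).astype(int)

Z = embed_columns(X_num, X_cat)
Z_small, y_small = distill(Z, y, per_class=10)
model = KNeighborsClassifier(n_neighbors=3).fit(Z_small, y_small)
print("accuracy on full data:", model.score(Z, y))
```

The centroid step here is deliberately the simplest possible distiller; the point of the sketch is the interface, in which distillation happens in the embedding space, so the distilled rows remain ordinary numeric vectors that any off-the-shelf tabular model can consume.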
Michael Hersche, Mustafa Zeqiri, et al.
NeSy 2023
Ryan Johnson, Ippokratis Pandis
CIDR 2013
Dzung Phan, Vinicius Lima
INFORMS 2023