Dataset distillation generates a small set of information-rich instances from a large dataset, reducing storage requirements, privacy or copyright risks, and computational costs for downstream modeling, though much of the research has focused on the image modality. We study tabular data distillation, which introduces novel challenges such as inherent feature heterogeneity and the common use of non-differentiable learning models (such as decision tree ensembles and nearest-neighbor predictors). To mitigate these challenges, we present TDColER, a tabular data distillation framework via column embeddings-based representation learning. To evaluate this framework, we also present a tabular data distillation benchmark, TDBench. Through an extensive evaluation on TDBench, comprising 226,890 distilled datasets and 548,880 models trained on them, we demonstrate that TDColER boosts the distilled data quality of off-the-shelf distillation schemes by 0.5-143% across 7 different tabular learning models. All of the code used in the experiments can be found at http://github.com/inwonakng/tdbench.
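The sketch below illustrates the general idea the abstract describes: map heterogeneous columns into a shared embedding space, distill in that space, and train a non-differentiable learner on the result. It is not the TDColER implementation (the authors' code is at the linked repository); the helper name `distill_tabular`, the parameter `per_class`, the one-hot-plus-standardization stand-in for learned column embeddings, and the per-class k-means stand-in for an off-the-shelf distillation scheme are all assumptions for illustration, and the input is assumed to be a pandas DataFrame with the given column lists.

```python
# Minimal sketch of the idea described above, with hypothetical names
# (distill_tabular, per_class); NOT the TDColER implementation, which is
# available at http://github.com/inwonakng/tdbench.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.compose import ColumnTransformer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import OneHotEncoder, StandardScaler


def distill_tabular(X, y, categorical_cols, numeric_cols, per_class=10):
    """Distill (X, y) into a few prototype rows per class in a shared embedding space."""
    # Stand-in for learned column embeddings: one-hot the categorical columns,
    # standardize the numeric ones, so every row lives in one numeric space.
    encoder = ColumnTransformer([
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
        ("num", StandardScaler(), numeric_cols),
    ])
    Z = encoder.fit_transform(X)
    Z = Z.toarray() if hasattr(Z, "toarray") else np.asarray(Z)
    y = np.asarray(y)

    # Stand-in for an off-the-shelf distillation scheme: per-class k-means
    # centroids serve as the small set of information-rich instances.
    distilled_Z, distilled_y = [], []
    for label in np.unique(y):
        Z_c = Z[y == label]
        k = min(per_class, len(Z_c))
        km = KMeans(n_clusters=k, n_init="auto", random_state=0).fit(Z_c)
        distilled_Z.append(km.cluster_centers_)
        distilled_y.extend([label] * k)
    return encoder, np.vstack(distilled_Z), np.array(distilled_y)


# Because distillation happens in the embedding space, a non-differentiable
# learner (here, nearest neighbors) can be trained directly on the result:
# encoder, Z_small, y_small = distill_tabular(X_train, y_train, cat_cols, num_cols)
# model = KNeighborsClassifier(n_neighbors=3).fit(Z_small, y_small)
# preds = model.predict(encoder.transform(X_test))
```

The design point this sketch is meant to convey is that once rows are represented in a single embedding space, the distilled set can be consumed by any downstream model, differentiable or not.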