Zijian Ding, Michelle Brachman, et al.
C&C 2025
Ensuring group fairness in federated learning (FL) presents unique challenges due to data heterogeneity and communication constraints. We propose Kernel Fair Federated Learning (KFFL), a novel algorithmic framework that incorporates group fairness into FL models using the Kernel Hilbert-Schmidt Independence Criterion (KHSIC) as a fairness regularizer. To address scalability, KFFL approximates the KHSIC with random feature maps, significantly reducing computational and communication overhead while achieving group fairness. For the resulting non-convex composite optimization problem, we propose FedProxGrad, a federated proximal gradient algorithm with convergence guarantees. In experiments on standard benchmark datasets, covering both IID and non-IID settings and both regression and classification tasks, KFFL balances accuracy and fairness effectively, outperforming existing methods by exploring the accuracy–fairness trade-off more comprehensively. Furthermore, we introduce KFFL-TD, a time-delayed variant that further reduces communication rounds, improving efficiency in decentralized environments. Code is available at github.com/Huzaifa-Arif/KFFL.
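The core idea of the regularizer can be sketched in a few lines. The following is a minimal illustration only, not the paper's implementation: the choice of an RBF kernel, the use of random Fourier features, and all names and parameters here (random_fourier_features, hsic_rff, n_features, gamma, lam) are assumptions for exposition. It shows how a biased HSIC estimate between model outputs and a sensitive attribute can be computed from centered random features, avoiding the n x n kernel matrices.

```python
import numpy as np

def random_fourier_features(x, n_features=64, gamma=1.0, seed=0):
    """Random Fourier features approximating an RBF kernel
    k(a, b) = exp(-gamma * ||a - b||^2)."""
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=np.sqrt(2.0 * gamma), size=(x.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(x @ w + b)

def hsic_rff(preds, sensitive, n_features=64):
    """Biased HSIC estimate between predictions and a sensitive attribute,
    computed from centered random features instead of full kernel matrices."""
    phi_y = random_fourier_features(preds, n_features, seed=0)
    phi_s = random_fourier_features(sensitive, n_features, seed=1)
    phi_y = phi_y - phi_y.mean(axis=0)  # centering in feature space replaces H = I - (1/n) 11^T
    phi_s = phi_s - phi_s.mean(axis=0)
    n = phi_y.shape[0]
    cross = phi_y.T @ phi_s / n         # D x D cross-covariance in feature space
    return float(np.sum(cross**2))      # ||cross||_F^2 ~= (1/n^2) tr(K H L H)

# Hypothetical local objective on a client: task loss plus the fairness penalty,
# with lam controlling the accuracy-fairness trade-off.
# loss = task_loss(preds, targets) + lam * hsic_rff(preds, sensitive)
```

Because the features have fixed dimension D = n_features, the cost of the penalty grows linearly in the local sample size rather than quadratically, which is what makes the regularizer cheap enough to evaluate and communicate in a federated setting.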
Shachar Don-Yehiya, Leshem Choshen, et al.
ACL 2025
Guillaume Buthmann, Tomoya Sakai, et al.
ICASSP 2025
Rei Odaira, Jose G. Castanos, et al.
IISWC 2013