Conference paper
Bias Mitigation Post-processing for Individual and Group Fairness
Abstract
Whereas previous post-processing approaches for increasing the fairness of predictions of biased classifiers address only group fairness, we propose a method for increasing both individual and group fairness. Our novel framework includes an individual bias detector used to prioritize data samples in a bias mitigation algorithm that aims to improve the group fairness measure of disparate impact. We show superior performance to previous work on the combination of classification accuracy, individual fairness, and group fairness on several real-world datasets in applications such as credit, employment, and criminal justice.
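The group fairness measure targeted above, disparate impact, is the ratio of favorable-outcome rates between the unprivileged and privileged groups. A minimal sketch of the metric follows; the helper name, the 0/1 group encoding, and the sample data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def disparate_impact(y_pred, protected):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    y_pred: binary predictions (1 = favorable outcome).
    protected: group membership (0 = unprivileged, 1 = privileged;
    an illustrative encoding assumed here).
    A value near 1.0 indicates group parity; the common four-fifths
    rule flags values below 0.8 as potentially discriminatory.
    """
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    rate_unpriv = y_pred[protected == 0].mean()
    rate_priv = y_pred[protected == 1].mean()
    return rate_unpriv / rate_priv

# Toy example: 3 of 4 unprivileged and 4 of 4 privileged samples
# receive the favorable outcome.
di = disparate_impact([1, 0, 1, 1, 1, 1, 1, 1],
                      [0, 0, 0, 0, 1, 1, 1, 1])
print(di)  # 0.75, below the four-fifths threshold
```

A post-processing mitigator in this setting would flip selected predictions (here, for samples the individual bias detector prioritizes) to push this ratio toward 1.0.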