Today, machine-learning software is used to help make decisions that affect people's lives. Some people believe that the application of such software results in fairer decisions because, unlike humans, machine-learning software generates models that are not biased. Think again. Machine-learning software is also biased, sometimes in ways similar to humans and often in different ways. Fair model-assisted decision making involves more than the application of unbiased models: it requires consideration of the application context, the specifics of the decisions being made, the resolution of conflicting stakeholder viewpoints, and so forth. Even so, mitigating bias in machine-learning software is important and possible, but it is difficult and too often ignored.
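To make the claim concrete, here is a minimal sketch of one simple way bias in a model's decisions can be quantified: the statistical parity difference, i.e., the gap in favorable-decision rates between two groups. This is an illustrative assumption, not a method from the text; the group labels, example data, and function names are hypothetical.

```python
def positive_rate(decisions):
    """Fraction of decisions that are favorable (1)."""
    return sum(decisions) / len(decisions)

def statistical_parity_difference(decisions_a, decisions_b):
    """Gap in favorable-decision rates between group A and group B.

    A value near 0 suggests the two groups receive favorable decisions at
    similar rates on this one metric; a large gap is a signal worth
    investigating, not proof of unfairness on its own (application context
    and other metrics still matter).
    """
    return positive_rate(decisions_a) - positive_rate(decisions_b)

if __name__ == "__main__":
    # Hypothetical model outputs for two demographic groups (1 = favorable).
    group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% favorable
    group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # 37.5% favorable

    gap = statistical_parity_difference(group_a, group_b)
    print(f"Statistical parity difference: {gap:.3f}")
```

Checks like this only surface one narrow notion of bias; as the text notes, fair model-assisted decision making also depends on context and stakeholder considerations that no single metric captures.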