Fairness, Accountability, Transparency
Biases can lead to systematic disadvantages for marginalized individuals and groups — and they can arise at any point in the AI development lifecycle. To increase the accountability of high-risk AI systems, we're developing technologies that improve their end-to-end transparency and fairness.
Our work
- IBM’s safety checkers top a new AI benchmark (News, Kim Martineau)
- IBM’s Mikhail Yurochkin wants to make AI’s “cool” factor tangible (Research, Kim Martineau)
- IBM reaffirms its commitment to the Rome Call for AI ethics (News, Mike Murphy)
- What is red teaming for generative AI? (Explainer, Kim Martineau)
- The latest AI safety method is a throwback to our maritime past (Research, Kim Martineau)
- What is AI alignment? (Explainer, Kim Martineau)
- See more of our work on Fairness, Accountability, Transparency
Projects
We're developing technological solutions that assist subject-matter experts with their scientific workflows by enabling human-AI co-creation.
Publications
The Literary Canons of Large-Language Models: An Exploration of the Frequency of Novel and Author Generations Across Gender, Race and Ethnicity, and Nationality
- Paulina Toro Isaza
- Nalani Kopp
- 2025
- NAACL 2025
Which Contributions Deserve Credit? Perceptions of Attribution in Human-AI Co-Creation
- 2025
- CHI 2025
Responsible Prompting Recommendation: Fostering Responsible AI Practices in Prompting-Time
- 2025
- CHI 2025
Ethical Co-Development of AI Applications with Indigenous Communities
- Claudio Santos Pinhanez
- Edem Wornyo
- 2025
- CHI 2025
Explain Yourself, Briefly! Self-Explaining Neural Networks with Concise Sufficient Reasons
- Shahaf Bassan
- Ron Eliav
- et al.
- 2025
- ICLR 2025
Usage Governance Advisor: From Intent to AI Governance
- 2025
- AAAI 2025