New Frontiers of Human-centered Explainable AI (HCXAI): Participatory Civic AI, Benchmarking LLMs and Hallucinations for XAI, and Responsible AI Audits. Upol Ehsan, Elizabeth Watkins, et al. CHI 2025.
Explain Yourself, Briefly! Self-Explaining Neural Networks with Concise Sufficient Reasons. Shahaf Bassan, Ron Eliav, et al. ICLR 2025.
User-centered approach of applicability domain analysis for improving AI-assisted decision-making. Siya Kunde, Emilio Ashton Vital Brazil. ACS Spring 2025.
Extracting Electrolyte Design from Interpretable Data-Driven Methods. Vidushi Sharma, Maxwell Giammona, et al. ACS Spring 2025.
Neural Reasoning Networks: Efficient interpretable neural networks with automatic textual explanations. Steve Carrow, Kyle Harper Erwin, et al. AAAI 2025.
Leveraging Interpretability in the Transformer to Automate the Proactive Scaling of Cloud Resources. Amadou Ba, Pavithra Harsha, et al. AAAI 2025.
Epistemic Bias as a Means for the Automated Detection of Injustices in Text. Kenya Andrews, Lamogha Chiazor. AAAI 2025.
Bridging the Gap Between AI Planning and Reinforcement Learning. Zlatan Ajanović, Timo Gros, et al. AAAI 2025.
How well can a large language model explain business processes as perceived by users? Dirk Fahland, Fabiana Fournier, et al. DKE, 2025.
Optimal Transport for Efficient, Unsupervised Anomaly Detection on Industrial Data. Abigail Langbridge, Fearghal O'Donncha, et al. Big Data 2024.