Erik Altman, Jovan Blanusa, et al.
NeurIPS 2023
This paper explores how humans make contextual moral judgments to inform the development of AI systems capable of balancing rule-following with flexibility. We investigate the limitations of rigid constraints in AI, which can hinder morally acceptable actions in specific contexts, unlike humans, who can override rules when appropriate. We propose a preference-based graphical model inspired by dual-process theories of moral judgment and conduct a study on human decisions about breaking the social norm of "no cutting in line." Our model outperforms standard machine learning methods in predicting human judgments and offers a generalizable framework for modeling moral decision-making across various contexts. This short paper summarizes the main findings of our paper published in the journal Autonomous Agents and Multi-Agent Systems [2].