Visualizations for an Explainable Planning Agent
Tathagata Chakraborti, Kshitij P. Fadnis, et al.
IJCAI 2018
In this demonstration, we report on the visualization capabilities of an Explainable AI Planning (XAIP) agent that can support human-in-the-loop decision making. Imposing transparency and explainability requirements on such agents is crucial for establishing human trust and common ground with an end-to-end automated planning system. Visualizing the agent's internal decision-making processes is a key step towards achieving this. This may include externalizing the “brain” of the agent: from its sensory inputs to the progressively higher-order decisions it makes to drive its planning components. We demonstrate these functionalities in the context of a smart assistant in the Cognitive Environments Laboratory at IBM's T.J. Watson Research Center.