Pavel Klavík, A. Cristiano I. Malossi, et al.
Philos. Trans. R. Soc. A
We demonstrate an interactive visualization system to promote interpretability of convolutional neural networks (CNNs). Interpretation of deep learning models operates at the interface between increasingly complex model architectures and model architects, providing an understanding of how a model operates, where it fails, and why it succeeds. Based on preliminary expert interviews and a careful literature review, we design the system to comprehensively support architects along four visual dimensions.
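As a hedged illustration of the kind of per-layer signal such a visualization system can surface (not the authors' implementation), the following Python sketch captures intermediate CNN activations with PyTorch forward hooks; the model (resnet18) and the layer names are illustrative assumptions.

# Minimal sketch, assuming PyTorch/torchvision: collect intermediate
# activations from a CNN so a visualization front end can display them.
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()  # stand-in model, untrained
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        # Detach so downstream visualization code never touches autograd.
        activations[name] = output.detach()
    return hook

# Register hooks on the convolutional stages we want to inspect.
for name, module in model.named_modules():
    if name in {"layer1", "layer2", "layer3", "layer4"}:
        module.register_forward_hook(save_activation(name))

# One forward pass on a dummy image fills the activation store.
with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))

for name, act in activations.items():
    print(name, tuple(act.shape))  # e.g. layer1 (1, 64, 56, 56)

In an interactive setting, the captured tensors would be reduced (for example, averaged over channels) and rendered as heatmaps so an architect can see where in the input each stage responds.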