
With the tremendous success of data-driven approaches in pattern recognition, natural language processing and decision-making for robotic tasks, the debate on the poor interpretability of "black-box" models has become more intense. This lack of interpretability limits their applicability and has therefore become a major research focus. Taking image classification on ImageNet as an example: although a simple ResNet-based classifier can readily achieve an accuracy that was unattainable just a few years ago, the complexity of its highly entangled network modules makes it impossible to prove the robustness and performance of its classifications on unseen data. Hence, there is a strong demand for tools that structurally inspect the learned "knowledge" of a neural network. Emerging post-hoc visualization techniques such as CAM, LIME and Grad-CAM attempt to improve explainability by making the predictions of Convolutional Neural Networks (CNNs) visually more transparent, even though the underlying CNN model itself usually remains uninterpretable. The challenge addressed by the VeriKAS project is to make the computations of such models interpretable to humans in an appropriate manner. VeriKAS is a BMWK-funded project that aims to realize robust explanation and, in the future, certification of deep learning-based systems, in collaboration with our project partners ZAL, HITeC and hs2.
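As a rough illustration of the kind of post-hoc visualization mentioned above, the following minimal sketch computes a Grad-CAM heatmap for a pretrained torchvision ResNet-18. The chosen layer, helper names and normalization are illustrative assumptions, not the VeriKAS implementation.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal Grad-CAM sketch (assumed setup, not the project's code):
# hook the last convolutional stage of a ResNet and weight its
# activation maps by the gradients of the target class score.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

layer = model.layer4  # last conv stage of ResNet-18 (illustrative choice)
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    """Return a coarse class-activation heatmap for a 1x3xHxW input tensor."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    acts = activations["value"]                     # (1, C, h, w)
    grads = gradients["value"]                      # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients
    cam = F.relu((weights * acts).sum(dim=1))       # weighted sum over channels
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze(), class_idx
```

The resulting heatmap highlights the image regions that most influenced the predicted class, which is the kind of visual evidence such post-hoc methods offer without making the CNN itself interpretable.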
PI: Prof. Dr. Stefan Wermter
Associates: Dr. Sven Magg, Dr. Matthias Kerzel, Wenhao Lu