New Project: "Verification of learning AI applications (VeriKAS)"
9 September 2021
Photo: UHH/Knowledge Technology
Knowledge Technology is commencing a new research project "Verification of learning AI applications (VeriKAS)".
PI: Prof. Dr. Stefan Wermter
Associates: Wenhao Lu, Dr. Sven Magg, Dr. Matthias Kerzel
VeriKAS is a BMWi-funded project that aims to realize robust explanations and, in the future, certification of deep learning-based systems, in collaboration with our project partners ZAL, HITeC and hs2.

With the tremendous success of data-driven approaches in pattern recognition, natural language processing and decision-making for robotic tasks, the debate about the poor interpretability of "black-box" models has intensified. This lack of interpretability limits their applicability and has therefore become a major research focus. Taking image classification on ImageNet as an example: although a simple ResNet-based classifier can readily achieve an accuracy that was unattainable just a few years ago, the complexity of its highly entangled network modules makes it impossible to prove the robustness and correctness of its classifications on unseen data. Hence, there is a huge demand for tools that structurally inspect the learned "knowledge" of a neural network.

Post-hoc visualization techniques such as CAM, LIME and Grad-CAM attempt to improve explainability by making the predictions of Convolutional Neural Networks (CNNs) visually more transparent, even though the full CNN-based model itself usually remains uninterpretable. The challenge of the VeriKAS project is to make the computations of such models interpretable by humans in an appropriate manner.
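To illustrate what a post-hoc visualization technique like Grad-CAM computes, here is a minimal NumPy sketch of its core step: gradients of a class score with respect to a convolutional layer's feature maps are average-pooled into per-channel weights, which then combine the feature maps into a heatmap. This is not the VeriKAS project's code; the function name and the random toy inputs are illustrative, and in real use the activations and gradients would be captured from a trained CNN (e.g., via framework hooks).

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM-style heatmap.

    activations: (K, H, W) feature maps A^k of a conv layer
    gradients:   (K, H, W) gradients dy_c / dA^k for class c
    returns:     (H, W) heatmap, normalized to [0, 1]
    """
    # alpha_k: global-average-pool the gradients over the spatial dims
    alphas = gradients.mean(axis=(1, 2))                      # shape (K,)
    # weighted sum of feature maps, then ReLU to keep positive evidence
    cam = np.maximum((alphas[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example with random tensors standing in for a real CNN's
# activations and gradients (8 channels, 7x7 spatial resolution).
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 7, 7))
dA = rng.standard_normal((8, 7, 7))
heatmap = grad_cam(A, dA)
print(heatmap.shape)
```

The resulting heatmap is typically upsampled to the input image's resolution and overlaid on it, highlighting the regions that most influenced the class prediction.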