TRAIL at IJCNN25
June 30, 2025, by Theresa Pekarek Rosin

Foto: UHH Knowledge Technology

Two of our DCs, Tien Pham and Theodor Wulff, presented their work in talks at the International Joint Conference on Neural Networks (IJCNN) 2025 in Rome, Italy.
Tien presented an Interpretable Feature Extractor (IFE), which uses an attention mechanism during the feature extraction stage to reveal the importance score of each pixel in the visual input toward the generated action. The proposed framework is evaluated in Atari environments to show the improvement in interpretability compared to Multi-head Attention or Grad-CAM approaches. His paper is titled: Pay Attention to What and Where? Interpretable Feature Extractor in Vision-based Deep Reinforcement Learning.
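To give a rough intuition for how an attention map can score pixel importance, here is a minimal, generic spatial-attention sketch in numpy. This is an illustration only, not the paper's IFE: the feature shapes, the single learned query vector, and the softmax pooling are all assumptions made for the example.

```python
import numpy as np

def spatial_attention(features, query):
    """Generic spatial-attention sketch (illustrative, NOT the paper's IFE).

    features: (H, W, C) feature map from a convolutional encoder
    query:    (C,) learned query vector (assumed for this sketch)

    Returns an attention-pooled feature vector and an (H, W) importance
    map whose entries sum to 1 -- each entry scores how much the
    corresponding spatial location contributed to the output.
    """
    H, W, C = features.shape
    flat = features.reshape(H * W, C)        # one row per spatial location
    logits = flat @ query                    # relevance score per location
    weights = np.exp(logits - logits.max())  # numerically stable softmax
    weights /= weights.sum()
    attended = weights @ flat                # (C,) attention-pooled features
    return attended, weights.reshape(H, W)   # map of per-location importance

# Toy usage with random features standing in for a real encoder output.
rng = np.random.default_rng(0)
feats = rng.normal(size=(7, 7, 32))
q = rng.normal(size=32)
pooled, importance = spatial_attention(feats, q)
```

Because the softmax weights are computed over spatial locations, the same numbers that pool the features also serve directly as an interpretability map, which is the general idea the post describes.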
Theodor's work, titled "Joint Action Language Modelling for Transparent Policy Execution",
presents a framework in which an agent's policy is learned as a language-generation task: the model must produce a natural-language statement describing its upcoming action alongside the action tokens themselves, a process found to affect both transparency and task performance.
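To sketch the sequence format such a setup implies, the snippet below shows language and action tokens sharing a single output stream, so one autoregressive objective covers both. The vocabulary, the `<SEP>` token, and the discretized action tokens are invented for illustration; the paper's actual tokenization and model are not reproduced here.

```python
import numpy as np

# Invented example tokens: a natural-language statement followed by
# discretized action tokens in one joint target sequence.
LANG = ["I", "will", "pick", "up", "the", "cube"]
ACTIONS = ["<ACT_GRASP>", "<ACT_LIFT>"]

# One shared vocabulary for both modalities (9 unique tokens here).
vocab = {tok: i for i, tok in enumerate(sorted(set(LANG + ACTIONS + ["<SEP>"])))}

# Target sequence: statement, separator, then the action tokens.
sequence = LANG + ["<SEP>"] + ACTIONS
ids = [vocab[t] for t in sequence]

# A single cross-entropy loss spans both modalities; uniform predicted
# probabilities stand in for a real model's outputs in this sketch.
probs = np.full(len(vocab), 1.0 / len(vocab))
loss = -np.mean([np.log(probs[i]) for i in ids])
print("joint sequence:", sequence)
print(f"per-token loss under uniform model: {loss:.3f}")
```

The point of the shared stream is that generating the explanation is not a separate head bolted onto the policy: the same next-token distribution produces both the statement and the action.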
The papers will be published in the conference proceedings, but the preprints are already available:
- Pay Attention to What and Where? Interpretable Feature Extractor in Vision-based Deep Reinforcement Learning. Code. Web page.
- Joint Action Language Modelling for Transparent Policy Execution. Code.

