Dr. Jae Hee Lee

Postdoctoral Research Associate, TRR CML Project
Knowledge Technology
About Me
I am a postdoctoral researcher in the Knowledge Technology Group at the University of Hamburg. My research interests lie in building robust multimodal language models that generalize to new tasks while retaining previously learned knowledge. My approach is to employ explainable AI and neuro-symbolic AI to elucidate and facilitate the construction of robust models. Previously, I worked on symbolic approaches to AI, such as spatio-temporal reasoning and multiagent systems.
I am an associate editor of AI Communications (The European Journal on Artificial Intelligence) and co-organized the International Workshop on Spatio-Temporal Reasoning and Learning 2023 and 2024. I also regularly serve as a PC member of major AI and NLP conferences.
News
- Jan 2025 – Program committee member, IJCAI 2025.
- Dec 2024 – Our DFG proposal (individual research grant, with Prof. Stefan Wermter) has been approved. The grant will fund two positions for three years for the project LUMO (Lifelong Multimodal Language Learning by Explaining and Exploiting Compositional Knowledge). Detailed information will be available in mid-2025 when the project begins.
- Nov 2024 – Mental Modeling of Reinforcement Learning Agents by Language Models has been accepted in Transactions on Machine Learning Research (TMLR).
- Sep 2024 – Program committee member, COLING 2025.
- Aug 2024 – Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go? has been accepted to the Explainable Computer Vision (eXCV) workshop at ECCV 2024.
- Jul 2024 – Associate editor of AI Communications, the European Journal on Artificial Intelligence.
- Jun 2024 – Read Between the Layers: Leveraging Intra-Layer Representations for Rehearsal-Free Continual Learning with Pre-Trained Models has been accepted in Transactions on Machine Learning Research (TMLR).
- May 2024 – Organizer of the LLM+XAI Reading Group at Knowledge Technology, University of Hamburg.
- Apr 2024 – From Neural Activations to Concepts: A Survey on Explaining Concepts in Neural Networks has been accepted in Neurosymbolic Artificial Intelligence Journal.
- Apr 2024 – Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic has been accepted to LREC-COLING 2024.
- Mar 2024 – Co-organizer of the 3rd International Workshop on Spatio-Temporal Reasoning and Learning.
- Feb 2024 – Program committee member, ECAI 2024.
- Jan 2024 – Causal State Distillation for Explainable Reinforcement Learning has been accepted to Causal Learning and Reasoning (CLeaR) 2024.
- Oct 2023 – Visually Grounded Continual Language Learning with Selective Specialization has been accepted to the Findings of EMNLP 2023.
- Sep 2023 – Program committee member, LREC-COLING 2024.
- Sep 2023 – Björn Plüster, an MSc student whom I advise, developed a German LLM called LeoLM with LAION and hessian.AI.
- Jun 2023 – Program committee member, EMNLP 2023.
- Apr 2023 – Internally Rewarded Reinforcement Learning has been accepted to ICML 2023.
- May 2023 – Program committee member, ECAI 2023.
- Dec 2022 – Program committee member, ACL 2023.
- Dec 2022 – Program committee member, IJCAI 2023.
- Oct 2022 – Received ca. €10K in research funding for a student project on “Explainable Visual Question Answering” that I am advising. Check out our paper and code.
Selected Publications
Full publication list available on DBLP, Google Scholar, and Semantic Scholar.
Multimodal Learning
- K. Ahrens, L. Bengtson, J. H. Lee, S. Wermter, Visually Grounded Continual Language Learning with Selective Specialization, EMNLP Findings 2023.
- J. H. Lee, M. Kerzel, K. Ahrens, C. Weber, S. Wermter, What is Right for Me is Not Yet Right for You: A Dataset for Grounding Relative Directions via Multi-Task Learning, IJCAI 2022.
- B. Plüster, J. Ambsdorf, L. Braach, J. H. Lee, S. Wermter, Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations, arXiv preprint, 2022.
- J. H. Lee, Y. Yao, O. Özdemir, M. Li, C. Weber, Z. Liu, S. Wermter, Spatial relation learning in complementary scenarios with deep neural networks, Frontiers in Neurorobotics, 2022.
- O. Özdemir, M. Kerzel, C. Weber, J. H. Lee, and S. Wermter, Language Model-Based Paired Variational Autoencoders for Robotic Language Learning, IEEE Transactions on Cognitive and Developmental Systems (TCDS), 2022.
- C. Volquardsen, J. H. Lee, C. Weber, and S. Wermter, More Diverse Training, Better Compositionality! Evidence from Multimodal Language Learning, ICANN 2022.
- M. Li, C. Weber, M. Kerzel, J. H. Lee, Z. Zeng, Z. Liu, S. Wermter, Robotic Occlusion Reasoning for Efficient Object Existence Prediction, IROS 2021.
- A. Eisermann, J. H. Lee, C. Weber, S. Wermter, Generalization in Multimodal Language Learning from Simulation, IJCNN 2021.
Explainable Artificial Intelligence
- W. Lu, X. Zhao, J. Spisak, J. H. Lee, S. Wermter, Mental Modeling of Reinforcement Learning Agents by Language Models, Transactions on Machine Learning Research (TMLR), 2025.
- J. H. Lee, G. Mikriukov, G. Schwalbe, S. Wermter, D. Wolter, Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go?, Explainable Computer Vision (eXCV) workshop at ECCV 2024.
- J. H. Lee, S. Lanza, S. Wermter, From Neural Activations to Concepts: A Survey on Explaining Concepts in Neural Networks, to appear in Neurosymbolic Artificial Intelligence Journal, 2024.
- W. Lu, X. Zhao, T. Fryen, J. H. Lee, M. Li, S. Magg, S. Wermter, Causal State Distillation for Explainable Reinforcement Learning, CLeaR 2024.
- B. Plüster, J. Ambsdorf, L. Braach, J. H. Lee, S. Wermter, Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations, arXiv preprint, 2022.
Neuro-Symbolic Artificial Intelligence
- J. H. Lee, S. Lanza, S. Wermter, From Neural Activations to Concepts: A Survey on Explaining Concepts in Neural Networks, to appear in Neurosymbolic Artificial Intelligence Journal, 2024.
- X. Zhao, M. Li, W. Lu, C. Weber, J. H. Lee, K. Chu, S. Wermter, Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic, LREC-COLING 2024.
- J. H. Lee, M. Sioutis, K. Ahrens, M. Alirezaie, M. Kerzel, S. Wermter, Neuro-Symbolic Spatio-Temporal Reasoning, in Compendium of Neurosymbolic Artificial Intelligence, IOS Press, 2023, pp. 410–429.
- J. H. Lee, Y. Yao, O. Özdemir, M. Li, C. Weber, Z. Liu, S. Wermter, Spatial relation learning in complementary scenarios with deep neural networks, Frontiers in Neurorobotics, 2022.
Compositional Generalization
- C. Volquardsen, J. H. Lee, C. Weber, and S. Wermter, More Diverse Training, Better Compositionality! Evidence from Multimodal Language Learning, ICANN 2022.
- A. Eisermann, J. H. Lee, C. Weber, S. Wermter, Generalization in Multimodal Language Learning from Simulation, IJCNN 2021.
Continual Learning
- K. Ahrens, H. H. Lehmann, J. H. Lee, S. Wermter, Read Between the Layers: Leveraging Intra-Layer Representations for Rehearsal-Free Continual Learning with Pre-Trained Models, Transactions on Machine Learning Research (TMLR), 2024.
- K. Ahrens, L. Bengtson, J. H. Lee, and S. Wermter, Visually Grounded Continual Language Learning with Selective Specialization, EMNLP Findings 2023.
Other Machine Learning Topics
- M. Li, X. Zhao, J. H. Lee, C. Weber, S. Wermter, Internally Rewarded Reinforcement Learning, ICML 2023.
- J. H. Lee, J. Camacho-Collados, L. Espinosa Anke, S. Schockaert, Capturing Word Order in Averaging Based Sentence Embeddings, ECAI 2020.
- S. Kong, J. Bai, J. H. Lee, D. Chen, A. Allyn, M. Stuart, M. Pinsky, K. Mills, C. Gomes, Deep Hurdle Networks for Zero-Inflated Multi-Target Regression: Application to Multiple Species Abundance Estimation, IJCAI 2020.
Spatio-Temporal Reasoning and Learning
- J. H. Lee, M. Sioutis, K. Ahrens, M. Alirezaie, M. Kerzel, S. Wermter, Neuro-Symbolic Spatio-Temporal Reasoning, in Compendium of Neurosymbolic Artificial Intelligence, IOS Press, 2023, pp. 410–429.
- J. H. Lee, M. Kerzel, K. Ahrens, C. Weber, S. Wermter, What is Right for Me is Not Yet Right for You: A Dataset for Grounding Relative Directions via Multi-Task Learning, IJCAI 2022.
- J. H. Lee et al., Spatial relation learning in complementary scenarios with deep neural networks, Frontiers in Neurorobotics, 2022.
- S. Kong, J. H. Lee, S. Li, Multiagent Simple Temporal Problem: The Arc-Consistency Approach, AAAI 2018.
- F. Dylla, J. H. Lee, T. Mossakowski, T. Schneider, A.V. Delden, J.V.D. Ven, D. Wolter, A Survey of Qualitative Spatial and Temporal Calculi: Algebraic and Computational Properties, ACM Computing Surveys, 2017.
- J. H. Lee, S. Li, Z. Long, M. Sioutis, On Redundancy in Simple Temporal Networks, ECAI 2016.
- D. Wolter, J. H. Lee, Connecting Qualitative Spatial and Temporal Representations by Propositional Closure, IJCAI 2016.
- X. Ge, J. H. Lee, J. Renz, P. Zhang, Trend-Based Prediction of Spatial Change, IJCAI 2016.
- S. Schockaert, J. H. Lee, Qualitative Reasoning About Directions in Semantic Spaces, IJCAI 2015.
- P. Zhang, J. H. Lee, J. Renz, From Raw Sensor Data to Detailed Spatial Knowledge, IJCAI 2015.
- J. H. Lee, The Complexity of Reasoning with Relative Directions, ECAI 2014.
- J. H. Lee, J. Renz, D. Wolter, StarVars: Effective Reasoning About Relative Directions, IJCAI 2013.
- D. Wolter, J. H. Lee, Qualitative reasoning with directional relations, Artificial Intelligence, 2010.
Multiagent Systems
- S. Kong, J. H. Lee, S. Li, Multiagent Simple Temporal Problem: The Arc-Consistency Approach, AAAI 2018.
- S. Kong, J. H. Lee, S. Li, A new distributed algorithm for efficient generalized arc-consistency propagation, Autonomous Agents and Multi-Agent Systems 32, 2018.
- S. Kong, J. H. Lee, S. Li, A Deterministic Distributed Algorithm for Reasoning with Connected Row-Convex Constraints, AAMAS 2017.
Recent Teaching Activities
- Lecture on “Large Language Models”, University of Hamburg (2024), with Xufeng Zhao [Slides]
- Lecture on “Transformers”, University of Hamburg (2024) [Slides]
- Organizer, LLM+XAI Reading Group, Knowledge Technology, University of Hamburg (2024)
- Supervisor, Neural Networks Seminar, University of Hamburg (2024)
- Supervisor, Bio-inspired Artificial Intelligence Seminar, University of Hamburg (2024)
- Organizer, WISDUM meeting, Knowledge Technology, University of Hamburg (2024)
Thesis Supervision
- Concept-Based Explanation for Continual Learning Models, Priscilla Cortese, MSc (2024)
- LeoLM: Evaluating and Enhancing German Language Proficiency in Pretrained Large Language Models, Björn Plüster, MSc (2024), now co-founder of ellamind; LeoLM is the first comprehensive German LLM suite.
- Continual Learning for Language-Conditioned Robotic Manipulation, Lennart Bengtson, MSc (2023)
- Multivariate Normal Methods in Pre-trained Models for Continual Learning, Hans Hergen Lehmann, BSc (2023); an improved version was accepted in the TMLR journal.
- Generalization of Transformer-Based Models on Visual Question Answering Tasks, Frederic Voigt, MSc (2023)
- Improving Compositional Generalization By Learning Concepts Individually, Ramtin Nouri, MSc (2023)
- Learning Concepts: A Developmental Lifelong Learning Approach to Visual Question Answering, Ramin Farkhondeh, BSc (2022)
- Benchmarking Faithfulness: Towards Accurate Natural Language Explanations in Vision-Language Tasks, Jakob Ambsdorf, MSc (2022), now PhD student at University of Copenhagen
- Tackling The Binding Problem And Compositional Generalization In Multimodal Language Learning, Caspar Volquardsen, BSc (2021), ICANN 2022
- Learning Bidirectional Translation Between Robot Actions and Linguistic Descriptions, Markus Heidrich, BSc (2021)
- Using the Reformer for Efficient Summarization, Yannick Wehr, BSc (2020)
- Generalization in Multi-Modal Language Learning from Simulation, Aaron Eisermann, BSc (2020), IJCNN 2021
- Commonsense Validation and Explanation, Christian Rahe, BSc (2020)
Reading Group
Time: Every Wednesday at 2 pm.
In this reading group, each participant presents a paper on an LLM or XAI topic that they want to learn anyway (teaching is one of the best ways to learn a topic). There is no need to prepare slides; simply showing the paper on screen is fine. The meetings take place via Zoom.
Upcoming presentations
- 15.01.25: Language Models Represent Space and Time to be presented by Jae
Past presentations
- 18.12.24: Deploying and Evaluating LLMs to Program Service Mobile Robots presented by Hassan
- 11.12.24: Interpreting Emergent Planning in Model-Free Reinforcement Learning presented by Wenhao
- 04.12.24: Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making presented by Xiaowen
- 27.11.24: Towards More Faithful Natural Language Explanation Using Multi-Level presented by Sergio
- 20.11.24: Sparse Crosscoders for Cross-Layer Features and Model Diffing presented by Jeremy
- 30.10.24: GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models presented by Hassan
- 23.10.24: Robotic Control via Embodied Chain-of-Thought Reasoning presented by Wenhao
- 16.10.24: AffordanceLLM: Grounding Affordance from Vision Language Models presented by Xiaowen
- 09.10.24: Interpreting Attention Layer Outputs with Sparse Autoencoders presented by Sergio
- 25.09.24: A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity presented by Xintong
- 18.09.24: No meeting
- 11.09.24: Understanding Social Reasoning in Language Models with Language Models presented by Burak
- 04.09.24: Do Large Language Models Latently Perform Multi-Hop Reasoning? presented by Jae
- 21.08.24: A Multimodal Automated Interpretability Agent presented by Wenhao
- 14.08.24: A Concept-Based Explainability Framework for Large Multimodal Models presented by Jae
- 07.08.24: Advancing LLM Reasoning Generalists with Preference Trees presented by Hassan
- 31.07.24: STaR: Self-Taught Reasoner presented by Imran
- 24.07.24: Neuron to Graph: Interpreting Language Model Neurons at Scale presented by Sergio
- 17.07.24: Detecting hallucinations in large language models using semantic entropy presented by Xiaowen
- 10.07.24: Identifying Linear Relational Concepts in Large Language Models presented by Jae
- 26.06.24: What does the Knowledge Neuron Thesis Have to do with Knowledge presented by Xintong
- 19.06.24: LLMs with Chain-of-Thought Are Non-Causal Reasoners presented by Wenhao
- 12.06.24: Self-Supervised Alignment with Mutual Information: Learning to Follow Principles without Preference Labels presented by Xufeng
- 05.06.24: A Survey on Evaluation of Large Language Models presented by Hassan
- 29.05.24: Inference-Time Intervention: Eliciting Truthful Answers from a Language Model presented by Sergio
- 22.05.24: Finding Neurons in a Haystack: Case Studies with Sparse Probing presented by Imran
- 15.05.24: Towards Uncovering How Large Language Model Works: An Explainability Perspective presented by Jae
Blog Posts
Projects
- Lifelong Multimodal Language Learning by Explaining and Exploiting Compositional Knowledge (2025–2029)
- DFG individual grant (732,610 EUR)
- University of Hamburg (PIs: Jae Hee Lee and Stefan Wermter)
- Topics: Multimodal Language Learning, XAI, Neuro-Symbolic AI
- Neurocognitive Models of Crossmodal Language Learning (2020–2025)
- University of Hamburg (PIs: Stefan Wermter and Cornelius Weber)
- Topics: Multimodal Language Learning, XAI, Neuro-Symbolic AI
- Formal Lexically Informed Logics for Searching the Web (2018–2020)
- Cardiff University (PI: Steven Schockaert)
- Topic: Natural Language Processing
- Feodor Lynen Research Fellowship by the Humboldt Foundation (2016–2017)
- University of Technology Sydney (Host: Sanjiang Li)
- Topics: Spatio-Temporal Reasoning and Multiagent Systems
- Artificial Intelligence Meets Wireless Sensor Networks (2015)
- The Australian National University (PI: Jochen Renz)
- Topic: Spatio-Temporal Reasoning and Learning
- Reasoning about Paths, Shapes, and Configuration (2009–2014)
- University of Bremen (PIs: Christian Freksa† and Diedrich Wolter)
- Topic: Spatio-Temporal Reasoning
- The International Research Training Group “Semantic Integration of Geospatial Information” (2010–2013)
- University of Bremen
- Topic: Spatio-Temporal Reasoning
Education
- Dr. rer. nat. in Computer Science, University of Bremen (2009–2013)
- Visiting PhD student, North Carolina State University (2012)
- Visiting PhD student, University at Buffalo (2011)
- Diplom in Mathematics, University of Bremen (2003–2009)
- DAAD exchange student, Seoul National University (2008)