Prof. Dr. Sören Laue

Professor for Machine Learning
Short Bio
I studied mathematics, physics, and computer science at the University of Leipzig and Saarland University. I received my Ph.D. in algorithmics and optimization from the Max Planck Institute for Informatics. After a research position in machine learning at Friedrich Schiller University Jena, I became Full Professor at the Technical University of Kaiserslautern, where I led the Algorithms for Machine Learning group. Since April 2023, I have been Professor of Machine Learning at the University of Hamburg.
My research focuses on the foundational analysis of machine learning, combining mathematical rigor with practical algorithm design. I work on learning models and optimization methods at a structural level, aiming to clarify their underlying principles and to develop methods with both theoretical guarantees and strong empirical performance. I created the web service MatrixCalculus.org, which serves more than 100,000 users annually.
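To give a flavor of what MatrixCalculus.org does (the following example is illustrative and not taken from the site itself): given a vector x and a matrix A, the service symbolically derives matrix derivatives such as

∂(xᵀ A x) / ∂x = (A + Aᵀ) x,

and analogous expressions for higher-order derivatives of general matrix and tensor expressions.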
Research
Research Profile
My research investigates the foundations of machine learning models and algorithms. I analyze how learning methods work internally, identifying which components are necessary, sufficient, or redundant, and under which conditions they succeed or fail. Rather than relying solely on empirical comparisons, I focus on structural and mathematical analysis to clarify mechanisms, limitations, and implicit biases induced by model and training design.
By dissecting models and optimization methods at a mathematical level, I aim to reduce complex method landscapes to a small number of essential mechanisms. This allows me to critically examine widely accepted claims, clarify implicit assumptions and limitations, and design simpler, more principled learning methods grounded in explicit theoretical insight.
Research Themes
Current research directions include:
- Foundational analysis of modern architectures, including transformers and diffusion models
- Analysis of optimization methods and training dynamics in machine learning
- Mathematical foundations of differentiation and matrix calculus underlying learning algorithms
- Principled design of learning and optimization methods with theoretical guarantees and strong empirical performance
Publications
Selected Publications. A full list can be found here.
- G. Li, X. Zhao, F. Wu, and S. Laue. Joint Design of Protein Surface and Structure Using a Diffusion Bridge Model, NeurIPS 2025.
- M. Blacher, Ch. Staudt, J. Klaus, M. Wenig, N. Merk, A. Breuer, M. Engel, S. Laue, and J. Giesen. Einsum Benchmark: Enabling the Development of Next-Generation Tensor Execution Engines, NeurIPS 2024.
- M. Blacher, J. Giesen, J. Klaus, Ch. Staudt, S. Laue, and V. Leis. Efficient and Portable Einstein Summation in SQL, SIGMOD 2023.
- M. Mitterreiter, M. Koch, J. Giesen, and S. Laue. Why Capsule Neural Networks Do Not Scale: Challenging the Dynamic Parse-Tree Assumption, AAAI 2023.
- J. Giesen, J. Klaus, S. Laue, N. Merk, and K. Wiedom. Convexity Certificates from Hessians, NeurIPS 2022.
- S. Laue, M. Blacher, and J. Giesen. Optimization for Classical Machine Learning Problems on the GPU, AAAI 2022.
- S. Laue, M. Mitterreiter, and J. Giesen. A Simple and Efficient Tensor Calculus, AAAI 2020.
- S. Laue, M. Mitterreiter, and J. Giesen. GENO - GENeric Optimization for Classical Machine Learning, NeurIPS 2019.
- J. Giesen, S. Laue, A. Loehne, and Ch. Schneider. Using Benson's Algorithm for Regularization Parameter Tracking, AAAI 2019.
- S. Laue, M. Mitterreiter, and J. Giesen. Computing Higher Order Derivatives for Matrix and Tensor Expressions, NeurIPS 2018.
- K. Blechschmidt, J. Giesen, and S. Laue. Tracking of Approximate Solutions of Parameterized Optimization Problems over Multi-Dimensional (Hyper-)Parameter Domains, ICML 2015.
- J. Giesen, S. Laue, and P. Wieschollek. Robust and Efficient Kernel Hyperparameter Paths with Guarantees, ICML 2014.
- J. Giesen, S. Laue, J. Mueller, and S. Swiercy. Approximating Concavely Parameterized Optimization Problems, NIPS 2012.
- S. Laue. A Hybrid Algorithm for Convex Semidefinite Optimization, ICML 2012.