Hello.

I'm Leonard Papenmeier, a postdoctoral researcher in machine learning, currently at the University of Münster, Germany. From September 2020 to June 2025, I pursued my PhD at Lund University, Sweden, as part of the Wallenberg AI, Autonomous Systems and Software Program (WASP), under the supervision of Luigi Nardi. Before that, I completed a Master's degree in Applied Computer Science at Ruhr University Bochum, Germany, and a Bachelor's degree in Software Engineering at the University of Applied Sciences in Dortmund, Germany.

I work on the optimization of black-box functions with Bayesian optimization, focusing on high-dimensional functions with hundreds of input parameters. I'm interested in exploring the limits of high-dimensional Bayesian optimization and in developing scalable, reliable algorithms for a broad set of high-dimensional problems.

Research Output.

Selected papers and projects across Bayesian optimization, exploration, and benchmarking.

Preprint 2026

SMOG: Scalable Meta-Learning for Multi-Objective Bayesian Optimization

Leonard Papenmeier, Petru Tighineanu

SMOG is a scalable meta-learning model for multi-objective Bayesian optimization: it learns correlations between objectives with a structured joint multi-output Gaussian process, caches meta-task fits, and integrates cleanly with standard acquisition functions (a toy multi-output GP is sketched below).

Meta-learning Multi-objective BO Gaussian processes Transfer learning
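
For intuition, here is a minimal numpy sketch of the kind of structured multi-output Gaussian process that underlies SMOG-style models, using an intrinsic coregionalization construction K = B ⊗ k(X, X) to couple objectives. The kernel, names, and setup are illustrative assumptions, not SMOG's actual implementation.

    import numpy as np

    def rbf(X1, X2, lengthscale=1.0):
        # Squared-exponential kernel on the inputs.
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale**2)

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(20, 3))   # 20 observations, 3 input dimensions
    m = 2                           # number of objectives

    # B encodes correlations between objectives (learned in practice).
    A = rng.normal(size=(m, 1))
    B = A @ A.T + 0.1 * np.eye(m)

    # Joint covariance over all (objective, input) pairs.
    K = np.kron(B, rbf(X, X)) + 1e-6 * np.eye(m * len(X))

    # Sample correlated objective values from the joint GP prior.
    y = np.linalg.cholesky(K) @ rng.normal(size=m * len(X))
    y = y.reshape(m, -1)            # row i: objective i at all inputs
    print(np.corrcoef(y[0], y[1]))  # objectives are coupled through B
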
ICML Workshop 2025 Accepted

Bencher – Simple and Reproducible Benchmarking for Black-Box Optimization

Leonard Papenmeier, Luigi Nardi

A modular benchmarking framework that isolates benchmarks in containerized environments and exposes them via a lightweight RPC interface to avoid dependency conflicts; a minimal client sketch follows below.

Benchmarking Reproducibility Black-box optimization
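
To give a flavor of the container-plus-RPC pattern: a thin client sends candidate points to a benchmark running inside its own container, so the optimizer never imports the benchmark's dependencies. The endpoint and payload format below are hypothetical, not Bencher's actual interface.

    import json
    import urllib.request

    # Hypothetical JSON-over-HTTP endpoint exposed by a containerized
    # benchmark; the real Bencher interface may differ.
    URL = "http://localhost:8000/evaluate"

    def evaluate(x):
        # Send one candidate point, receive the objective value.
        payload = json.dumps({"x": list(x)}).encode("utf-8")
        req = urllib.request.Request(
            URL, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["y"]

    # Dependency conflicts stay inside the container: the optimizer only
    # needs this thin client, not the benchmark's libraries.
    # y = evaluate([0.1, 0.7, 0.3])
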
ICML 2025 Accepted

A Unified Framework for Entropy Search and Expected Improvement in Bayesian Optimization

Nuojin Cheng*, Leonard Papenmeier*, Stephen Becker, Luigi Nardi

Shows that Expected Improvement can be derived as a variational-inference approximation of Max-value Entropy Search and proposes VES-Gamma to blend information-theoretic and EI-style strategies, with strong empirical results (the standard EI quantity is sketched below).

Entropy search Expected improvement Bayesian optimization
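
For reference, here is the EI side of the correspondence in closed form under a Gaussian posterior; this is the textbook quantity, not VES-Gamma itself, whose derivation is in the paper.

    from statistics import NormalDist

    def expected_improvement(mu, sigma, best, minimize=True):
        # Closed-form EI at a point with posterior N(mu, sigma^2).
        if sigma <= 0.0:
            return 0.0
        z = (best - mu) / sigma if minimize else (mu - best) / sigma
        n = NormalDist()
        return sigma * (z * n.cdf(z) + n.pdf(z))

    print(expected_improvement(mu=0.2, sigma=0.5, best=0.3))
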
UAI 2025 Accepted

Exploring Exploration in Bayesian Optimization

Leonard Papenmeier*, Nuojin Cheng*, Stephen Becker, Luigi Nardi

Introduces the observation traveling salesman distance and observation entropy to quantify exploration, revealing how acquisition-function behavior links to empirical performance and guiding principled design; a toy version of both metrics is sketched below.

Exploration metrics Acquisition functions Bayesian optimization
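
A toy reading of the two quantities (my simplifications, not the paper's exact definitions): a greedy approximation of the shortest tour through the evaluated points as a stand-in for the traveling salesman distance, and a histogram entropy of the observed values.

    import numpy as np

    def greedy_tour_length(X):
        # Nearest-neighbor approximation of the shortest tour through
        # the evaluated points; long tours suggest more exploration.
        unvisited = list(range(1, len(X)))
        current, total = 0, 0.0
        while unvisited:
            dists = [np.linalg.norm(X[current] - X[j]) for j in unvisited]
            k = int(np.argmin(dists))
            total += dists[k]
            current = unvisited.pop(k)
        return total

    def histogram_entropy(y, bins=10):
        # Shannon entropy of a histogram of the observed values.
        counts, _ = np.histogram(y, bins=bins)
        p = counts[counts > 0] / counts.sum()
        return float(-(p * np.log(p)).sum())

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(30, 5))   # 30 evaluated points in 5 dimensions
    y = rng.normal(size=30)         # their observed values
    print(greedy_tour_length(X), histogram_entropy(y))
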
ICML 2025 Accepted

Understanding High-Dimensional Bayesian Optimization

Leonard Papenmeier, Matthias Poloczek, Luigi Nardi

Identifies why simple Bayesian optimization methods can work in hundreds of dimensions, showing the role of vanishing gradients under standard Gaussian process initialization, and proposes MSR for state-of-the-art performance; the underlying kernel effect is illustrated below.

Bayesian optimization High-dimensional search Gaussian processes
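
The core effect is easy to reproduce: with a fixed unit lengthscale, squared distances between random points grow linearly with the dimension, so the off-diagonal kernel entries collapse toward zero and the likelihood surface around that initialization becomes nearly flat. A toy illustration (my construction, not the paper's experiment):

    import numpy as np

    rng = np.random.default_rng(0)
    for d in (2, 20, 200):
        X = rng.uniform(size=(50, d))
        # RBF kernel with lengthscale 1 in every dimension.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-0.5 * d2)
        off = K[~np.eye(len(X), dtype=bool)]
        # As d grows the mean off-diagonal entry vanishes: the GP looks
        # like white noise and likelihood gradients carry little signal.
        print(d, off.mean())
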
Preprint 2023

High-Dimensional Bayesian Optimization with Group Testing

Erik Hellsten*, Carl Hvarfner*, Leonard Papenmeier*, Luigi Nardi

Applies group-testing ideas to identify active dimensions before optimizing, improving efficiency on synthetic and real-world tasks while surfacing influential parameters; a toy bisection variant is sketched below.

Group testing Active dimension discovery Bayesian optimization
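
A toy version of the group-testing idea (a deliberately naive sketch, far simpler than the paper's probabilistic method): perturb a whole group of dimensions at once, and recurse only into groups whose perturbation changes the output.

    import numpy as np

    def find_active(f, x0, dims, eps=1e-6, delta=0.5):
        # One test rules out a whole group; active groups are bisected.
        x = x0.copy()
        x[dims] += delta
        if abs(f(x) - f(x0)) < eps:
            return []                     # entire group inactive
        if len(dims) == 1:
            return [int(dims[0])]
        mid = len(dims) // 2
        return (find_active(f, x0, dims[:mid], eps, delta)
                + find_active(f, x0, dims[mid:], eps, delta))

    # Example: only dimensions 3 and 17 influence the output.
    f = lambda x: x[3] ** 2 + np.sin(x[17])
    print(find_active(f, np.zeros(100), np.arange(100)))   # [3, 17]
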
NeurIPS 2023 Published

Bounce – Reliable High-Dimensional Bayesian Optimization for Combinatorial and Mixed Spaces

Leonard Papenmeier, Luigi Nardi, Matthias Poloczek

Introduces nested embeddings for mixed and combinatorial variables to deliver reliable optimization performance across a wide range of high-dimensional benchmarks.

Combinatorial spaces Mixed-variable BO Reliability
NeurIPS 2022 Published

Increasing the Scope as You Learn – Adaptive Bayesian Optimization in Nested Subspaces

Leonard Papenmeier, Luigi Nardi, Matthias Poloczek

Starts optimization in sparse low-dimensional embeddings and expands them while reusing observations, enabling efficient BO in high-dimensional settings; a minimal embedding sketch follows below.

Sparse embeddings High-dimensional BO Adaptive search
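
A minimal sketch of a sparse nested embedding in the spirit of this line of work and of Bounce (illustrative assumptions throughout): every ambient dimension is assigned to one subspace dimension with a random sign; expanding the subspace splits one of its dimensions, and earlier observations remain valid because children inherit the parent's coordinate.

    import numpy as np

    rng = np.random.default_rng(0)
    D = 100                                # ambient dimension

    def embed(bins, signs, y):
        # Map a low-dimensional point y into the ambient space.
        return signs * y[bins]

    bins = np.zeros(D, dtype=int)          # start with a 1-D subspace
    signs = rng.choice([-1.0, 1.0], size=D)

    y = np.array([0.3])                    # an observation in the subspace
    x_before = embed(bins, signs, y)

    # Expand: split subspace dimension 0 into dimensions 0 and 1.
    move = rng.random(D) < 0.5
    bins = np.where((bins == 0) & move, 1, bins)
    y = np.array([0.3, 0.3])               # children inherit the value...
    x_after = embed(bins, signs, y)

    print(np.allclose(x_before, x_after))  # ...so the point is unchanged
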