Marius Lindauer

Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning

Sep 13, 2023
Joseph Giovanelli, Alexander Tornede, Tanja Tornede, Marius Lindauer

Hyperparameter optimization (HPO) is important to leverage the full potential of machine learning (ML). In practice, users are often interested in multi-objective (MO) problems, i.e., optimizing potentially conflicting objectives such as accuracy and energy consumption. To tackle this, the vast majority of MO-ML algorithms return a Pareto front of non-dominated machine learning models to the user. Optimizing the hyperparameters of such algorithms is non-trivial, as evaluating a hyperparameter configuration entails evaluating the quality of the resulting Pareto front. In the literature, there are known indicators that assess the quality of a Pareto front (e.g., hypervolume, R2) by quantifying different properties (e.g., volume, proximity to a reference point). However, choosing the indicator that leads to the desired Pareto front can be a hard task for a user. In this paper, we propose a human-centered interactive HPO approach tailored to multi-objective ML that leverages preference learning to extract desiderata from users to guide the optimization. Instead of relying on the user to guess the most suitable indicator for their needs, our approach learns an appropriate indicator automatically. Concretely, we leverage pairwise comparisons of distinct Pareto fronts to learn such a quality indicator. Then, we optimize the hyperparameters of the underlying MO-ML algorithm towards this learned indicator using a state-of-the-art HPO approach. In an experimental study targeting the environmental impact of ML, we demonstrate that our approach leads to substantially better Pareto fronts than optimizing based on a wrong indicator pre-selected by the user, and performs comparably in the case of an advanced user who knows which indicator to pick.
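To make the idea concrete, here is a minimal sketch of learning a scalar quality indicator from pairwise front comparisons, assuming a Bradley-Terry-style preference model over hand-picked front features (hypervolume and spread). The feature choice and the helper names (`front_features`, `fit_indicator`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

def front_features(front, ref=(1.0, 1.0)):
    """Illustrative per-front features for a 2-D minimization problem:
    hypervolume w.r.t. a reference point, plus the front's spread.
    Assumes objectives are scaled to [0, 1] and the front is non-dominated."""
    pts = np.array(sorted(front))              # ascending in objective 1
    xs = np.append(pts[:, 0], ref[0])
    hv = np.sum((xs[1:] - xs[:-1]) * (ref[1] - pts[:, 1]))
    spread = np.linalg.norm(pts[-1] - pts[0])
    return np.array([hv, spread])

def fit_indicator(duels, fronts):
    """Bradley-Terry-style fit: `duels` is a list of (i, j) pairs meaning
    'the user preferred front i over front j'."""
    X = np.array([front_features(f) for f in fronts])

    def nll(w):  # negative log-likelihood of the observed preferences
        d = X[[i for i, _ in duels]] @ w - X[[j for _, j in duels]] @ w
        return np.sum(np.log1p(np.exp(-d)))

    return minimize(nll, x0=np.zeros(X.shape[1])).x

# The learned weights w define a scalar indicator u(front) = w . phi(front)
# that can then serve as the single objective inside any standard HPO loop.
```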

Self-Adjusting Weighted Expected Improvement for Bayesian Optimization

Jun 30, 2023
Carolin Benjamins, Elena Raponi, Anja Jankovic, Carola Doerr, Marius Lindauer

Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets. The BO pipeline itself is highly configurable, with many design choices regarding the initial design, surrogate model, and acquisition function (AF). Unfortunately, our understanding of how to select suitable components for the problem at hand is very limited. In this work, we focus on the definition of the AF, whose main purpose is to balance the trade-off between exploring regions with high uncertainty and those with high promise for good solutions. We propose Self-Adjusting Weighted Expected Improvement (SAWEI), which lets the exploration-exploitation trade-off self-adjust in a data-driven manner based on a convergence criterion for BO. On the noise-free black-box BBOB functions of the COCO benchmarking platform, our method exhibits favorable anytime performance compared to handcrafted baselines and serves as a robust default choice for any problem structure. The suitability of our method also transfers to HPOBench. With SAWEI, we are a step closer to on-the-fly, data-driven, and robust BO designs that automatically adjust their sampling behavior to the problem at hand.

* AutoML Conference 2023 
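For intuition, a sketch of the weighted Expected Improvement criterion that SAWEI adjusts is given below, where the weight w trades off the exploitation and exploration terms. The acquisition formula follows the standard weighted EI of Sobester et al.; the `adjust_weight` rule is a schematic stand-in for the paper's convergence-based criterion, not the actual SAWEI update.

```python
import numpy as np
from scipy.stats import norm

def weighted_ei(mu, sigma, f_best, w):
    """Weighted Expected Improvement for minimization:
    w weights the exploitation term, (1 - w) the exploration term."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return w * (f_best - mu) * norm.cdf(z) + (1 - w) * sigma * norm.pdf(z)

def adjust_weight(w, incumbents, window=5, step=0.1):
    """Schematic stand-in for SAWEI's rule: if the incumbent value has not
    improved over the last `window` iterations, shift weight toward
    exploration; otherwise shift toward exploitation."""
    stalled = (len(incumbents) > window
               and incumbents[-1] >= incumbents[-window - 1])
    return float(np.clip(w - step if stalled else w + step, 0.0, 1.0))
```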

AutoML in Heavily Constrained Applications

Jun 29, 2023
Felix Neutatz, Marius Lindauer, Ziawasch Abedjan

Optimizing a machine learning pipeline for a task at hand requires careful configuration of various hyperparameters, typically supported by an AutoML system that optimizes the hyperparameters for the given training dataset. Yet, depending on the AutoML system's own second-order meta-configuration, the performance of the AutoML process can vary significantly. Current AutoML systems cannot automatically adapt their own configuration to a specific use case. Further, they cannot comply with user-defined application constraints on the effectiveness and efficiency of the pipeline and its generation. In this paper, we propose Caml, which uses meta-learning to automatically adapt its own AutoML parameters, such as the search strategy, the validation strategy, and the search space, for the task at hand. Caml's dynamic AutoML strategy takes user-defined constraints into account and obtains constraint-satisfying pipelines with high predictive performance.
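The shape of such a constraint-aware search might look like the following sketch. The constraint names and the `sample_pipeline` / `evaluate` helpers are hypothetical placeholders for illustration only; they are not Caml's real API.

```python
# Hypothetical sketch of constraint-aware pipeline search in the spirit of
# Caml: only pipelines satisfying the user's constraints are kept.
import time

constraints = {
    "max_search_seconds": 600,     # budget for the whole AutoML run
    "max_inference_ms": 5.0,       # latency limit for the final pipeline
    "max_pipeline_size_mb": 50.0,  # memory-footprint limit
}

def constrained_search(sample_pipeline, evaluate, constraints):
    best, best_score, start = None, -float("inf"), time.time()
    while time.time() - start < constraints["max_search_seconds"]:
        pipe = sample_pipeline()   # e.g. drawn from a meta-learned search space
        stats = evaluate(pipe)     # {'score', 'inference_ms', 'size_mb'}
        if (stats["inference_ms"] <= constraints["max_inference_ms"]
                and stats["size_mb"] <= constraints["max_pipeline_size_mb"]
                and stats["score"] > best_score):
            best, best_score = pipe, stats["score"]
    return best
```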

Structure in Reinforcement Learning: A Survey and Open Problems

Jun 28, 2023
Aditya Mohan, Amy Zhang, Marius Lindauer

Reinforcement Learning (RL), bolstered by the expressive capabilities of Deep Neural Networks (DNNs) for function approximation, has demonstrated considerable success in numerous applications. However, its practicality in addressing a wide range of real-world scenarios, characterized by diverse and unpredictable dynamics, noisy signals, and large state and action spaces, remains limited. This limitation stems from issues such as poor data efficiency, limited generalization capabilities, a lack of safety guarantees, and the absence of interpretability, among other factors. To overcome these challenges and improve performance across these crucial metrics, one promising avenue is to incorporate additional structural information about the problem into the RL learning process. Various sub-fields of RL have proposed methods for incorporating such inductive biases. We amalgamate these diverse methodologies under a unified framework, shedding light on the role of structure in the learning problem, and classify these methods into distinct patterns of incorporating structure. By leveraging this comprehensive framework, we provide valuable insights into the challenges associated with structured RL and lay the groundwork for a design pattern perspective on RL research. This novel perspective paves the way for future advancements and aids in the development of more effective and efficient RL algorithms that can potentially handle real-world scenarios better.

PriorBand: Practical Hyperparameter Optimization in the Age of Deep Learning

Jun 21, 2023
Neeratyoy Mallik, Edward Bergman, Carl Hvarfner, Danny Stoll, Maciej Janowski, Marius Lindauer, Luigi Nardi, Frank Hutter

Hyperparameters of Deep Learning (DL) pipelines are crucial for their downstream performance. While a large number of methods for Hyperparameter Optimization (HPO) have been developed, their incurred costs are often untenable for modern DL. Consequently, manual experimentation is still the most prevalent approach to optimize hyperparameters, relying on the researcher's intuition, domain knowledge, and cheap preliminary explorations. To resolve this misalignment between HPO algorithms and DL researchers, we propose PriorBand, an HPO algorithm tailored to DL that is able to utilize both expert beliefs and cheap proxy tasks. Empirically, we demonstrate PriorBand's efficiency across a range of DL benchmarks and show its gains under informative expert input and robustness against poor expert beliefs.
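One way to picture the use of expert beliefs is the mixture sampler sketched below, which draws candidate configurations from the expert's prior, uniformly at random, or by perturbing the incumbent. This is a simplified illustration: the mixing proportions, the perturbation scheme, and how PriorBand actually adapts them across fidelities are all assumptions here, not the algorithm's exact mechanics.

```python
import numpy as np

def mixture_sample(prior_sample, uniform_sample, incumbent, rng,
                   p_prior=0.5, p_uniform=0.25):
    """Illustrative mixture sampler in the spirit of PriorBand."""
    u = rng.random()
    if u < p_prior or incumbent is None:
        return prior_sample()        # the expert's belief about good configs
    if u < p_prior + p_uniform:
        return uniform_sample()      # guards against misleading priors
    return {k: v * rng.normal(1.0, 0.1)  # local perturbation of the incumbent
            for k, v in incumbent.items()}

# Usage: rng = np.random.default_rng(0); plug the sampler into a
# successive-halving/HyperBand loop that promotes configs across fidelities.
```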

Automated Machine Learning for Remaining Useful Life Predictions

Jun 21, 2023
Marc-André Zöller, Fabian Mauthe, Peter Zeiler, Marius Lindauer, Marco F. Huber

Being able to predict the remaining useful life (RUL) of an engineering system is an important task in prognostics and health management. Recently, data-driven approaches to RUL prediction have become prevalent over model-based approaches, since no underlying physical knowledge of the engineering system is required. Yet, this merely replaces the required expertise in the underlying physics with machine learning (ML) expertise, which is often also not available. Automated machine learning (AutoML) promises to build end-to-end ML pipelines automatically, enabling domain experts without ML expertise to create their own models. This paper introduces AutoRUL, an AutoML-driven end-to-end approach for automatic RUL prediction. AutoRUL combines fine-tuned standard regression methods into an ensemble with high predictive power. By evaluating the proposed method on eight real-world and synthetic datasets against state-of-the-art hand-crafted models, we show that AutoML provides a viable alternative to hand-crafted data-driven RUL prediction. Consequently, AutoML makes RUL prediction more accessible to domain experts by removing the need for ML expertise in data-driven model construction.

* Manuscript accepted at IEEE SMC 2023 
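The "ensemble of tuned standard regressors" idea can be illustrated with scikit-learn as below. This is only a minimal sketch of the concept: AutoRUL's actual search space, tuning procedure, and the specific hyperparameter values shown here are not taken from the paper.

```python
# Minimal sketch: stack several standard regressors into one RUL predictor.
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rul_ensemble = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200)),
        ("svr", SVR(C=10.0)),
        ("knn", KNeighborsRegressor(n_neighbors=7)),
    ],
    final_estimator=Ridge(),  # combines the base models' predictions
)
# rul_ensemble.fit(X_train, y_rul_train); rul_ensemble.predict(X_test)
```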

AutoML in the Age of Large Language Models: Current Challenges, Future Opportunities and Risks

Jun 13, 2023
Alexander Tornede, Difan Deng, Theresa Eimer, Joseph Giovanelli, Aditya Mohan, Tim Ruhkopf, Sarah Segel, Daphne Theodorakopoulos, Tanja Tornede, Henning Wachsmuth, Marius Lindauer

The fields of both Natural Language Processing (NLP) and Automated Machine Learning (AutoML) have achieved remarkable results over the past years. In NLP, Large Language Models (LLMs) in particular have very recently experienced a rapid series of breakthroughs. We envision that the two fields can radically push the boundaries of each other through tight integration. To showcase this vision, we explore the potential of a symbiotic relationship between AutoML and LLMs, shedding light on how they can benefit each other. In particular, we investigate both the opportunities to enhance AutoML approaches with LLMs from different perspectives and the challenges of leveraging AutoML to further improve LLMs. To this end, we survey existing work and critically assess the risks. We strongly believe that the integration of the two fields has the potential to disrupt both NLP and AutoML. By highlighting conceivable synergies, but also risks, we aim to foster further exploration at the intersection of AutoML and LLMs.

Hyperparameters in Reinforcement Learning and How To Tune Them

Jun 02, 2023
Theresa Eimer, Marius Lindauer, Roberta Raileanu

In order to improve reproducibility, deep reinforcement learning (RL) has been adopting better scientific practices such as standardized evaluation metrics and reporting. However, the process of hyperparameter optimization still varies widely across papers, which makes it challenging to compare RL algorithms fairly. In this paper, we show that hyperparameter choices in RL can significantly affect the agent's final performance and sample efficiency, and that the hyperparameter landscape can strongly depend on the tuning seed, which may lead to overfitting. We therefore propose adopting established best practices from AutoML, such as the separation of tuning and testing seeds, as well as principled hyperparameter optimization (HPO) across a broad search space. We support this by comparing multiple state-of-the-art HPO tools on a range of RL algorithms and environments to their hand-tuned counterparts, demonstrating that HPO approaches often yield higher performance with lower compute overhead. As a result of our findings, we recommend a set of best practices for the RL community, which should result in stronger empirical results at lower computational cost, better reproducibility, and thus faster progress. In order to encourage the adoption of these practices, we provide plug-and-play implementations of the tuning algorithms used in this paper at https://github.com/facebookresearch/how-to-autorl.
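The tuning/testing seed separation advocated here is easy to sketch in code. The `train_agent` function below is a placeholder for any RL training run that returns a final score; the specific seed values are arbitrary.

```python
import numpy as np

TUNING_SEEDS = [0, 1, 2]            # used only during HPO
TEST_SEEDS = [10, 11, 12, 13, 14]   # held out for the final report

def tuning_objective(config, train_agent):
    """Score a hyperparameter config on the tuning seeds only."""
    return np.mean([train_agent(config, seed=s) for s in TUNING_SEEDS])

def final_evaluation(best_config, train_agent):
    """Report performance on seeds never seen during tuning, guarding
    against hyperparameters overfitted to particular seeds."""
    returns = [train_agent(best_config, seed=s) for s in TEST_SEEDS]
    return np.mean(returns), np.std(returns)
```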
