
Frank Hutter

TAU, LISN

Why Do Machine Learning Practitioners Still Use Manual Tuning? A Qualitative Study

Mar 03, 2022

Neural Architecture Search for Dense Prediction Tasks in Computer Vision

Feb 15, 2022

NAS-Bench-Suite: NAS Evaluation is (Now) Surprisingly Easy

Feb 11, 2022

Contextualize Me -- The Case for Context in Reinforcement Learning

Feb 09, 2022

Theory-inspired Parameter Control Benchmarks for Dynamic Algorithm Configuration

Feb 07, 2022

Learning Synthetic Environments and Reward Networks for Reinforcement Learning

Feb 06, 2022

Transformers Can Do Bayesian Inference

Jan 25, 2022

Automated Reinforcement Learning (AutoRL): A Survey and Open Problems

Jan 11, 2022

Winning solutions and post-challenge analyses of the ChaLearn AutoDL challenge 2019

Jan 11, 2022

NAS-Bench-x11 and the Power of Learning Curves

Nov 05, 2021