Willie Neiswanger

Department of Computer Science, Stanford University

AutoML for Climate Change: A Call to Action

Oct 07, 2022

Exploration via Planning for Information about the Optimal Trajectory

Oct 06, 2022

Generalizing Bayesian Optimization with Decision-theoretic Entropies

Oct 04, 2022

Bayesian Algorithm Execution for Tuning Particle Accelerator Emittance with Partial Measurements

Sep 10, 2022

Modular Conformal Calibration

Jul 05, 2022

Betty: An Automatic Differentiation Library for Multilevel Optimization

Jul 05, 2022

A General Recipe for Likelihood-free Bayesian Optimization

Jun 27, 2022

Generative Modeling Helps Weak Supervision (and Vice Versa)

Mar 22, 2022

IS-COUNT: Large-scale Object Counting from Satellite Images with Covariate-based Importance Sampling

Dec 16, 2021

An Experimental Design Perspective on Model-Based Reinforcement Learning

Dec 09, 2021