Andrew Gordon Wilson

SysML: The New Frontier of Machine Learning Systems

Mar 29, 2019
Alexander Ratner, Dan Alistarh, Gustavo Alonso, Peter Bailis, Sarah Bird, Nicholas Carlini, Bryan Catanzaro, Eric Chung, Bill Dally, Jeff Dean, Inderjit S. Dhillon, Alexandros Dimakis, Pradeep Dubey, Charles Elkan, Grigori Fursin, Gregory R. Ganger, Lise Getoor, Phillip B. Gibbons, Garth A. Gibson, Joseph E. Gonzalez, Justin Gottschlich, Song Han, Kim Hazelwood, Furong Huang, Martin Jaggi, Kevin Jamieson, Michael I. Jordan, Gauri Joshi, Rania Khalaf, Jason Knight, Jakub Konečný, Tim Kraska, Arun Kumar, Anastasios Kyrillidis, Jing Li, Samuel Madden, H. Brendan McMahan, Erik Meijer, Ioannis Mitliagkas, Rajat Monga, Derek Murray, Dimitris Papailiopoulos, Gennady Pekhimenko, Theodoros Rekatsinas, Afshin Rostamizadeh, Christopher Ré, Christopher De Sa, Hanie Sedghi, Siddhartha Sen, Virginia Smith, Alex Smola, Dawn Song, Evan Sparks, Ion Stoica, Vivienne Sze, Madeleine Udell, Joaquin Vanschoren, Shivaram Venkataraman, Rashmi Vinayak, Markus Weimer, Andrew Gordon Wilson, Eric Xing, Matei Zaharia, Ce Zhang, Ameet Talwalkar

Exact Gaussian Processes on a Million Data Points

Mar 19, 2019
Ke Alexander Wang, Geoff Pleiss, Jacob R. Gardner, Stephen Tyree, Kilian Q. Weinberger, Andrew Gordon Wilson

Practical Multi-fidelity Bayesian Optimization for Hyperparameter Tuning

Mar 12, 2019
Jian Wu, Saul Toscano-Palmerin, Peter I. Frazier, Andrew Gordon Wilson

Cyclical Stochastic Gradient MCMC for Bayesian Deep Learning

Feb 11, 2019
Ruqi Zhang, Chunyuan Li, Jianyi Zhang, Changyou Chen, Andrew Gordon Wilson

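The core idea of this paper is to run SG-MCMC with a step size that repeatedly warms up and decays, so each cycle first explores the loss surface and then samples near a newly found mode. Below is a minimal sketch, assuming a plain SGLD update and a cosine cycle; the function names and hyperparameters are illustrative, not the authors' reference code.

```python
import math
import torch

def cyclical_stepsize(step, total_steps, num_cycles, lr_max):
    # Cosine schedule restarted every cycle: starts at lr_max, decays toward 0.
    cycle_len = math.ceil(total_steps / num_cycles)
    pos = (step % cycle_len) / cycle_len
    return lr_max / 2.0 * (math.cos(math.pi * pos) + 1.0)

@torch.no_grad()
def sgld_step(params, grads_log_post, lr):
    # One SGLD update: ascend the estimated log-posterior gradient and inject
    # Gaussian noise with variance 2 * lr. With minibatches, the likelihood
    # part of the gradient should be rescaled by N / batch_size.
    for p, g in zip(params, grads_log_post):
        p.add_(lr * g + math.sqrt(2.0 * lr) * torch.randn_like(p))
```

In the paper, the large-step-size portion of each cycle is treated as exploration, and only iterates from the small-step-size tail of the cycle are kept as approximate posterior samples.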

A Simple Baseline for Bayesian Uncertainty in Deep Learning

Feb 07, 2019
Wesley Maddox, Timur Garipov, Pavel Izmailov, Dmitry Vetrov, Andrew Gordon Wilson

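The baseline proposed here (SWAG) builds a Gaussian posterior approximation from the SGD trajectory. The sketch below keeps only the diagonal part of the covariance and omits the paper's low-rank term; it assumes weights are collected once per epoch late in training, and the class and method names are mine, not the released implementation.

```python
import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

class DiagSWAG:
    """Running first and second moments of the flattened weights (diagonal SWAG)."""
    def __init__(self, model):
        self.model = model
        self.n = 0
        w = parameters_to_vector(model.parameters()).detach()
        self.mean = torch.zeros_like(w)
        self.sq_mean = torch.zeros_like(w)

    def collect(self):
        # Call at the end of each epoch during the averaging phase of training.
        w = parameters_to_vector(self.model.parameters()).detach()
        self.n += 1
        self.mean += (w - self.mean) / self.n
        self.sq_mean += (w * w - self.sq_mean) / self.n

    def sample(self, scale=1.0):
        # Draw weights from N(mean, diag(var)) and load them into the model.
        var = (self.sq_mean - self.mean ** 2).clamp(min=1e-30)
        w = self.mean + scale * var.sqrt() * torch.randn_like(self.mean)
        vector_to_parameters(w, self.model.parameters())
```

At test time, one draws several weight samples, runs the network with each, and averages the resulting predictive distributions (a Bayesian model average).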

Change Surfaces for Expressive Multidimensional Changepoints and Counterfactual Prediction

Oct 30, 2018
William Herlands, Daniel B. Neill, Hannes Nickisch, Andrew Gordon Wilson

Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs

Oct 30, 2018
Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, Andrew Gordon Wilson

Scaling Gaussian Process Regression with Derivatives

Oct 29, 2018
David Eriksson, Kun Dong, Eric Hans Lee, David Bindel, Andrew Gordon Wilson

GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration

Oct 29, 2018
Jacob R. Gardner, Geoff Pleiss, David Bindel, Kilian Q. Weinberger, Andrew Gordon Wilson

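GPyTorch is released as an open-source PyTorch library. The snippet below is a minimal exact GP regression example in the style of its documentation, fit to toy data; it is a sketch of basic usage rather than the benchmark setup from the paper.

```python
import torch
import gpytorch

class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

# Toy 1D regression data.
train_x = torch.linspace(0, 1, 100)
train_y = torch.sin(train_x * 6.28) + 0.1 * torch.randn(100)

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)

# Fit hyperparameters by maximizing the exact marginal log likelihood.
model.train(); likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for _ in range(50):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()

# Posterior predictions on new inputs.
model.eval(); likelihood.eval()
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    pred = likelihood(model(torch.linspace(0, 1, 51)))
    mean = pred.mean
    lower, upper = pred.confidence_region()
```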

Averaging Weights Leads to Wider Optima and Better Generalization

Aug 08, 2018
Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson

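This is the stochastic weight averaging (SWA) paper; the technique later shipped in PyTorch as torch.optim.swa_utils. Below is a minimal sketch of typical usage on a toy regression problem; the model, data, schedule, and epoch counts are placeholders, not the paper's experimental setup.

```python
import torch
from torch import nn
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

# Toy setup; in practice the model, data loader, and loss come from your training code.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
data = [(torch.randn(16, 10), torch.randn(16, 1)) for _ in range(8)]
loss_fn = nn.MSELoss()

optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
swa_model = AveragedModel(model)               # keeps the running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=0.01)  # constant (or cyclical) SWA learning rate
swa_start = 5

for epoch in range(10):
    for x, y in data:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    if epoch >= swa_start:
        swa_model.update_parameters(model)     # fold the current weights into the average
        swa_scheduler.step()

# Recompute BatchNorm statistics for the averaged model; a no-op for this toy net,
# but required when the network contains BatchNorm layers.
update_bn(data, swa_model)
```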