
Yee Whye Teh


Neural Ensemble Search for Performant and Calibrated Predictions

Jun 15, 2020
Sheheryar Zaidi, Arber Zela, Thomas Elsken, Chris Holmes, Frank Hutter, Yee Whye Teh

Non-exchangeable feature allocation models with sublinear growth of the feature sizes

Mar 30, 2020
Giuseppe Di Benedetto, François Caron, Yee Whye Teh

Simple and Scalable Epistemic Uncertainty Estimation Using a Single Deep Deterministic Neural Network

Mar 04, 2020
Joost van Amersfoort, Lewis Smith, Yee Whye Teh, Yarin Gal

Pruning untrained neural networks: Principles and Analysis

Feb 19, 2020
Soufiane Hayou, Jean-Francois Ton, Arnaud Doucet, Yee Whye Teh

Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise

Feb 13, 2020
Umut Şimşekli, Lingjiong Zhu, Yee Whye Teh, Mert Gürbüzbalaban

MetaFun: Meta-Learning with Iterative Functional Updates

Dec 05, 2019
Jin Xu, Jean-Francois Ton, Hyunjik Kim, Adam R. Kosiorek, Yee Whye Teh

Amortized Rejection Sampling in Universal Probabilistic Programming

Nov 30, 2019
Saeid Naderiparizi, Adam Ścibior, Andreas Munk, Mehrdad Ghadiri, Atılım Güneş Baydin, Bradley Gram-Hansen, Christian Schroeder de Witt, Robert Zinkov, Philip H. S. Torr, Tom Rainforth, Yee Whye Teh, Frank Wood

A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments

Nov 01, 2019
Adam Foster, Martin Jankowiak, Matthew O'Meara, Yee Whye Teh, Tom Rainforth

Continual Unsupervised Representation Learning

Oct 31, 2019
Dushyant Rao, Francesco Visin, Andrei A. Rusu, Yee Whye Teh, Razvan Pascanu, Raia Hadsell

Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support

Oct 29, 2019
Yuan Zhou, Hongseok Yang, Yee Whye Teh, Tom Rainforth
