Yarin Gal


Can convolutional ResNets approximately preserve input distances? A frequency analysis perspective

Jun 04, 2021
Lewis Smith, Joost van Amersfoort, Haiwen Huang, Stephen Roberts, Yarin Gal

Physically-Consistent Generative Adversarial Networks for Coastal Flood Visualization

May 05, 2021
Björn Lütjens, Brandon Leshchinskiy, Christian Requena-Mesa, Farrukh Chishtie, Natalia Díaz-Rodríguez, Océane Boulais, Aruna Sankaranarayanan, Aaron Piña, Yarin Gal, Chedy Raïssi, Alexander Lavin, Dava Newman

Outcome-Driven Reinforcement Learning via Variational Inference

Apr 20, 2021
Tim G. J. Rudner, Vitchyr H. Pong, Rowan McAllister, Yarin Gal, Sergey Levine

Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties

Mar 16, 2021
Lisa Schut, Oscar Key, Rory McGrath, Luca Costabello, Bogdan Sacaleanu, Medb Corcoran, Yarin Gal

Robustness to Pruning Predicts Generalization in Deep Neural Networks

Mar 10, 2021
Lorenz Kuhn, Clare Lyle, Aidan N. Gomez, Jonas Rothfuss, Yarin Gal

Active Testing: Sample-Efficient Model Evaluation

Mar 09, 2021
Jannik Kossen, Sebastian Farquhar, Yarin Gal, Tom Rainforth

Quantifying Ignorance in Individual-Level Causal-Effect Estimates under Hidden Confounding

Mar 08, 2021
Andrew Jesson, Sören Mindermann, Yarin Gal, Uri Shalit

PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning

Feb 24, 2021
Angelos Filos, Clare Lyle, Yarin Gal, Sergey Levine, Natasha Jaques, Gregory Farquhar

Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty

Feb 23, 2021
Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip H. S. Torr, Yarin Gal