
Samy Bengio

Google Research

Training cascaded networks for speeded decisions using a temporal-difference loss

Feb 19, 2021

NeurIPS 2020 Competition: Predicting Generalization in Deep Learning

Dec 14, 2020

Data Augmentation via Structured Adversarial Perturbations

Nov 05, 2020

Characterising Bias in Compressed Models

Oct 06, 2020

Auto Completion of User Interface Layout Design Using Transformer-Based Tree Decoders

Jan 14, 2020

Fantastic Generalization Measures and Where to Find Them

Dec 04, 2019

Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML

Sep 19, 2019

Efficient Exploration with Self-Imitation Learning via Trajectory-Conditioned Policy

Jul 24, 2019

Parallel Scheduled Sampling

Jun 11, 2019

A Closed-Form Learned Pooling for Deep Classification Networks

Jun 10, 2019