Mohammad Norouzi

SpeechStew: Simply Mix All Available Speech Recognition Data to Train One Large Neural Network

Apr 27, 2021

Image Super-Resolution via Iterative Refinement

Apr 15, 2021

Benchmarks for Deep Off-Policy Evaluation

Mar 30, 2021

Big Self-Supervised Models Advance Medical Image Classification

Jan 13, 2021

What's in a Loss Function for Image Classification?

Oct 30, 2020

No MCMC for me: Amortized sampling for fast and stable training of energy-based models

Oct 14, 2020

Mastering Atari with Discrete World Models

Oct 05, 2020

WaveGrad: Estimating Gradients for Waveform Generation

Sep 02, 2020

RL Unplugged: Benchmarks for Offline Reinforcement Learning

Jul 02, 2020

Big Self-Supervised Models are Strong Semi-Supervised Learners

Jun 17, 2020