
Ekin D. Cubuk

Generative Hierarchical Materials Search

Sep 10, 2024

G-Augment: Searching For The Meta-Structure Of Data Augmentation Policies For ASR

Oct 19, 2022

On the surprising tradeoff between ImageNet accuracy and perceptual similarity

Mar 09, 2022

No One Representation to Rule Them All: Overlapping Features of Training Methods

Oct 26, 2021

Multi-Task Self-Training for Learning General Representations

Aug 25, 2021

Revisiting ResNets: Improved Training and Scaling Strategies

Mar 13, 2021

Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation

Dec 13, 2020

Crystal Structure Search with Random Relaxations Using Graph Networks

Dec 08, 2020

Kohn-Sham Equations as Regularizer: Building Prior Knowledge into Machine-Learned Physics

Sep 17, 2020

Rethinking Pre-training and Self-training

Jun 11, 2020