
Samira Abnar

Adaptivity and Modularity for Efficient Generalization Over Task Complexity

Oct 13, 2023

Diffusion Probabilistic Fields

Mar 01, 2023

GAUDI: A Neural Architect for Immersive 3D Scene Generation

Jul 27, 2022

Scaling Laws vs Model Architectures: How does Inductive Bias Influence Scaling?

Jul 21, 2022

Exploring the Limits of Large Scale Pre-training

Oct 05, 2021

Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers

Sep 22, 2021

Gradual Domain Adaptation in the Wild: When Intermediate Distributions are Absent

Jun 10, 2021

Long Range Arena: A Benchmark for Efficient Transformers

Nov 08, 2020

Transferring Inductive Biases through Knowledge Distillation

Jun 02, 2020

Quantifying Attention Flow in Transformers

May 31, 2020