
Aude Oliva

MIT

ConMe: Rethinking Evaluation of Compositional Reasoning for Modern VLMs

Jun 12, 2024

$\textit{Trans-LoRA}$: towards data-free Transferable Parameter Efficient Finetuning

May 27, 2024

Dense Training, Sparse Inference: Rethinking Training of Mixture-of-Experts Language Models

Apr 08, 2024

Learning Human Action Recognition Representations Without Real Humans

Nov 10, 2023

LangNav: Language as a Perceptual Representation for Navigation

Oct 11, 2023

Going Beyond Nouns With Vision & Language Models Using Synthetic Data

Mar 30, 2023

Deepfake Caricatures: Amplifying attention to artifacts increases deepfake detection by humans and machines

Jun 02, 2022

Ego4D: Around the World in 3,000 Hours of Egocentric Video

Oct 13, 2021

Dynamic Network Quantization for Efficient Video Inference

Aug 23, 2021

IA-RED$^2$: Interpretability-Aware Redundancy Reduction for Vision Transformers

Jun 23, 2021