Tejas Srinivasan

University of Southern California

Compare without Despair: Reliable Preference Evaluation with Generation Separability

Jul 02, 2024

Selective "Selective Prediction": Reducing Unnecessary Abstention in Vision-Language Reasoning

Feb 23, 2024

WinoViz: Probing Visual Properties of Objects Under Different States

Feb 21, 2024

Exploring Strategies for Modeling Sign Language Phonology

Sep 30, 2023

I2I: Initializing Adapters with Improvised Knowledge

Apr 04, 2023

Multimodal Speech Recognition for Language-Guided Embodied Agents

Feb 27, 2023

VAuLT: Augmenting the Vision-and-Language Transformer with the Propagation of Deep Language Representations

Aug 18, 2022

Curriculum Learning for Data-Efficient Vision-Language Alignment

Jul 29, 2022

CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks

Jun 18, 2022

Worst of Both Worlds: Biases Compound in Pre-trained Vision-and-Language Models

Apr 18, 2021