Tejas Srinivasan
University of Southern California

Selective "Selective Prediction": Reducing Unnecessary Abstention in Vision-Language Reasoning

Add code
Bookmark button
Alert button
Feb 23, 2024
Tejas Srinivasan, Jack Hessel, Tanmay Gupta, Bill Yuchen Lin, Yejin Choi, Jesse Thomason, Khyathi Raghavi Chandu

Viaarxiv icon

WinoViz: Probing Visual Properties of Objects Under Different States
Feb 21, 2024
Woojeong Jin, Tejas Srinivasan, Jesse Thomason, Xiang Ren

Exploring Strategies for Modeling Sign Language Phonology
Sep 30, 2023
Lee Kezar, Riley Carlin, Tejas Srinivasan, Zed Sehyr, Naomi Caselli, Jesse Thomason

I2I: Initializing Adapters with Improvised Knowledge
Apr 04, 2023
Tejas Srinivasan, Furong Jia, Mohammad Rostami, Jesse Thomason

Multimodal Speech Recognition for Language-Guided Embodied Agents
Feb 27, 2023
Allen Chang, Xiaoyuan Zhu, Aarav Monga, Seoho Ahn, Tejas Srinivasan, Jesse Thomason

VAuLT: Augmenting the Vision-and-Language Transformer with the Propagation of Deep Language Representations
Aug 18, 2022
Georgios Chochlakis, Tejas Srinivasan, Jesse Thomason, Shrikanth Narayanan

Curriculum Learning for Data-Efficient Vision-Language Alignment
Jul 29, 2022
Tejas Srinivasan, Xiang Ren, Jesse Thomason

CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks
Jun 18, 2022
Tejas Srinivasan, Ting-Yun Chang, Leticia Leonor Pinto Alva, Georgios Chochlakis, Mohammad Rostami, Jesse Thomason

Worst of Both Worlds: Biases Compound in Pre-trained Vision-and-Language Models
Apr 18, 2021
Tejas Srinivasan, Yonatan Bisk

Reasoning Over History: Context Aware Visual Dialog
Nov 02, 2020
Muhammad A. Shah, Shikib Mehri, Tejas Srinivasan