Jesse Thomason

The Sem-Lex Benchmark: Modeling ASL Signs and Their Phonemes

Sep 30, 2023
Lee Kezar, Elana Pontecorvo, Adele Daniels, Connor Baer, Ruth Ferster, Lauren Berger, Jesse Thomason, Zed Sevcikova Sehyr, Naomi Caselli

Figures 1–4

Exploring Strategies for Modeling Sign Language Phonology

Sep 30, 2023
Lee Kezar, Riley Carlin, Tejas Srinivasan, Zed Sehyr, Naomi Caselli, Jesse Thomason

Figures 1–3

Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering

May 24, 2023
Wang Zhu, Jesse Thomason, Robin Jia

Figures 1–4

I2I: Initializing Adapters with Improvised Knowledge

Apr 04, 2023
Tejas Srinivasan, Furong Jia, Mohammad Rostami, Jesse Thomason

Figures 1–4

Task-Attentive Transformer Architecture for Continual Learning of Vision-and-Language Tasks Using Knowledge Distillation

Mar 25, 2023
Yuliang Cai, Jesse Thomason, Mohammad Rostami

Figures 1–4

Multimodal Speech Recognition for Language-Guided Embodied Agents

Feb 27, 2023
Allen Chang, Xiaoyuan Zhu, Aarav Monga, Seoho Ahn, Tejas Srinivasan, Jesse Thomason

Figures 1–4

Improving Sign Recognition with Phonology

Feb 11, 2023
Lee Kezar, Jesse Thomason, Zed Sevcikova Sehyr

Figures 1–4

RREx-BoT: Remote Referring Expressions with a Bag of Tricks

Jan 30, 2023
Gunnar A. Sigurdsson, Jesse Thomason, Gaurav S. Sukhatme, Robinson Piramuthu

Figures 1–4

CLIP-Nav: Using CLIP for Zero-Shot Vision-and-Language Navigation

Nov 30, 2022
Vishnu Sashank Dorbala, Gunnar Sigurdsson, Robinson Piramuthu, Jesse Thomason, Gaurav S. Sukhatme

Figures 1–4

Generalization Differences between End-to-End and Neuro-Symbolic Vision-Language Reasoning Systems

Oct 26, 2022
Wang Zhu, Jesse Thomason, Robin Jia

Figures 1–4