Roy Schwartz

Transformers are Multi-State RNNs

Jan 11, 2024

Read, Look or Listen? What's Needed for Solving a Multimodal Dataset

Jul 06, 2023

Surveying (Dis)Parities and Concerns of Compute Hungry NLP Research

Jun 29, 2023

Morphosyntactic probing of multilingual BERT models

Jun 09, 2023

Finding the SWEET Spot: Analysis and Improvement of Adaptive Inference in Low Resource Settings

Jun 04, 2023

Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases

May 30, 2023

Textually Pretrained Speech Language Models

May 22, 2023

Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images

Mar 14, 2023

VASR: Visual Analogies of Situation Recognition

Dec 08, 2022

How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers

Nov 07, 2022