
Roy Schwartz

Transformers are Multi-State RNNs

Jan 11, 2024
Matanel Oren, Michael Hassid, Yossi Adi, Roy Schwartz

Read, Look or Listen? What's Needed for Solving a Multimodal Dataset

Jul 06, 2023
Netta Madvil, Yonatan Bitton, Roy Schwartz

Surveying (Dis)Parities and Concerns of Compute Hungry NLP Research

Jun 29, 2023
Ji-Ung Lee, Haritz Puerto, Betty van Aken, Yuki Arase, Jessica Zosa Forde, Leon Derczynski, Andreas Rücklé, Iryna Gurevych, Roy Schwartz, Emma Strubell, Jesse Dodge

Morphosyntactic probing of multilingual BERT models

Jun 09, 2023
Judit Acs, Endre Hamerlik, Roy Schwartz, Noah A. Smith, Andras Kornai

Finding the SWEET Spot: Analysis and Improvement of Adaptive Inference in Low Resource Settings

Jun 04, 2023
Daniel Rotem, Michael Hassid, Jonathan Mamou, Roy Schwartz

Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases

May 30, 2023
Yuval Reif, Roy Schwartz

Textually Pretrained Speech Language Models

May 22, 2023
Michael Hassid, Tal Remez, Tu Anh Nguyen, Itai Gat, Alexis Conneau, Felix Kreuk, Jade Copet, Alexandre Defossez, Gabriel Synnaeve, Emmanuel Dupoux, Roy Schwartz, Yossi Adi

Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images

Mar 14, 2023
Nitzan Bitton-Guetta, Yonatan Bitton, Jack Hessel, Ludwig Schmidt, Yuval Elovici, Gabriel Stanovsky, Roy Schwartz

VASR: Visual Analogies of Situation Recognition

Dec 08, 2022
Yonatan Bitton, Ron Yosef, Eli Strugo, Dafna Shahaf, Roy Schwartz, Gabriel Stanovsky

How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers

Nov 07, 2022
Michael Hassid, Hao Peng, Daniel Rotem, Jungo Kasai, Ivan Montero, Noah A. Smith, Roy Schwartz
