Gabriel Stanovsky

Do Zombies Understand? A Choose-Your-Own-Adventure Exploration of Machine Cognition

Mar 01, 2024
Ariel Goldstein, Gabriel Stanovsky

Leveraging Collection-Wide Similarities for Unsupervised Document Structure Extraction

Feb 21, 2024
Gili Lior, Yoav Goldberg, Gabriel Stanovsky

K-QA: A Real-World Medical Q&A Benchmark

Jan 25, 2024
Itay Manes, Naama Ronn, David Cohen, Ran Ilan Ber, Zehavi Horowitz-Kugler, Gabriel Stanovsky

State of What Art? A Call for Multi-Prompt LLM Evaluation

Dec 31, 2023
Moran Mizrahi, Guy Kaplan, Dan Malkin, Rotem Dror, Dafna Shahaf, Gabriel Stanovsky

Exploring the Impact of Training Data Distribution and Subword Tokenization on Gender Bias in Machine Translation

Sep 30, 2023
Bar Iluz, Tomasz Limisiewicz, Gabriel Stanovsky, David Mareček

Instructed to Bias: Instruction-Tuned Language Models Exhibit Emergent Cognitive Bias

Aug 01, 2023
Itay Itzhak, Gabriel Stanovsky, Nir Rosenfeld, Yonatan Belinkov

Are Layout-Infused Language Models Robust to Layout Distribution Shifts? A Case Study with Scientific Documents

Jun 01, 2023
Catherine Chen, Zejiang Shen, Dan Klein, Gabriel Stanovsky, Doug Downey, Kyle Lo

Comparing Humans and Models on a Similar Scale: Towards Cognitive Gender Bias Evaluation in Coreference Resolution

May 24, 2023
Gili Lior, Gabriel Stanovsky

Schema-Driven Information Extraction from Heterogeneous Tables

May 23, 2023
Fan Bai, Junmo Kang, Gabriel Stanovsky, Dayne Freitag, Alan Ritter
