"Text": models, code, and papers

Understanding and Bridging the Modality Gap for Speech Translation

May 15, 2023
Qingkai Fang, Yang Feng

Transformers are Short Text Classifiers: A Study of Inductive Short Text Classifiers on Benchmarks and Real-world Datasets

Nov 30, 2022
Fabian Karl, Ansgar Scherp

Syntax and Semantics Meet in the "Middle": Probing the Syntax-Semantics Interface of LMs Through Agentivity

May 29, 2023
Lindia Tjuatja, Emmy Liu, Lori Levin, Graham Neubig

Do Language Models Know When They're Hallucinating References?

May 29, 2023
Ayush Agrawal, Lester Mackey, Adam Tauman Kalai

Byte-Level Grammatical Error Correction Using Synthetic and Curated Corpora

May 29, 2023
Svanhvít Lilja Ingólfsdóttir, Pétur Orri Ragnarsson, Haukur Páll Jónsson, Haukur Barri Símonarson, Vilhjálmur Þorsteinsson, Vésteinn Snæbjarnarson

3D Model-based Zero-Shot Pose Estimation Pipeline

May 29, 2023
Jianqiu Chen, Mingshan Sun, Tianpeng Bao, Rui Zhao, Liwei Wu, Zhenyu He

Pruning Pre-trained Language Models with Principled Importance and Self-regularization

May 21, 2023
Siyu Ren, Kenny Q. Zhu

Are Your Explanations Reliable? Investigating the Stability of LIME in Explaining Textual Classification Models via Adversarial Perturbation

May 21, 2023
Christopher Burger, Lingwei Chen, Thai Le

ChatGPT Is More Likely to Be Perceived as Male Than Female

May 21, 2023
Jared Wong, Jin Kim

What's in a Name? Evaluating Assembly-Part Semantic Knowledge in Language Models through User-Provided Names in CAD Files

Apr 25, 2023
Peter Meltzer, Joseph G. Lambourne, Daniele Grandi
