Sameer Singh

Learning with Instance Bundles for Reading Comprehension

Apr 18, 2021

Competency Problems: On Finding and Removing Artifacts in Language Data

Apr 17, 2021

An Empirical Comparison of Instance Attribution Methods for NLP

Apr 09, 2021

Paired Examples as Indirect Supervision in Latent Decision Models

Apr 05, 2021

Calibrate Before Use: Improving Few-Shot Performance of Language Models

Feb 19, 2021

AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts

Nov 07, 2020

Customizing Triggers with Concealed Data Poisoning

Oct 23, 2020

MOCHA: A Dataset for Training and Evaluating Generative Reading Comprehension Metrics

Oct 15, 2020

MedICaT: A Dataset of Medical Images, Captions, and Textual References

Oct 12, 2020

Gradient-based Analysis of NLP Models is Manipulable

Oct 12, 2020