
Eric Wallace


Imitation Attacks and Defenses for Black-box Machine Translation Systems

Apr 30, 2020

Pretrained Transformers Improve Out-of-Distribution Robustness

Apr 16, 2020

Evaluating NLP Models via Contrast Sets

Apr 06, 2020

Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers

Feb 26, 2020

AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models

Sep 19, 2019

Do NLP Models Know Numbers? Probing Numeracy in Embeddings

Sep 18, 2019

Universal Adversarial Triggers for Attacking and Analyzing NLP

Aug 29, 2019

Compositional Questions Do Not Necessitate Multi-hop Reasoning

Jun 07, 2019

Misleading Failures of Partial-input Baselines

May 14, 2019

Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation

Feb 01, 2019