
Boxin Wang

Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study (Apr 13, 2023)

FOCUS: Fairness via Agent-Awareness for Federated Learning on Heterogeneous Data (Jul 21, 2022)

SemAttack: Natural Textual Attacks via Different Semantic Spaces (May 16, 2022)

Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models (Feb 08, 2022)

Certifying Out-of-Domain Generalization for Blackbox Functions (Feb 03, 2022)

Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models (Nov 04, 2021)

Counterfactual Adversarial Learning with Representation Interpolation (Sep 10, 2021)

Incorporating External POS Tagger for Punctuation Restoration (Jun 12, 2021)

DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation (Mar 20, 2021)

InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective (Oct 14, 2020)