Boxin Wang

InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining

Oct 11, 2023
Boxin Wang, Wei Ping, Lawrence McAfee, Peng Xu, Bo Li, Mohammad Shoeybi, Bryan Catanzaro

DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models

Jun 20, 2023
Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li

Can Public Large Language Models Help Private Cross-device Federated Learning?

May 20, 2023
Boxin Wang, Yibo Jacky Zhang, Yuan Cao, Bo Li, H. Brendan McMahan, Sewoong Oh, Zheng Xu, Manzil Zaheer

Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study

Apr 13, 2023
Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, Anima Anandkumar, Bryan Catanzaro

FOCUS: Fairness via Agent-Awareness for Federated Learning on Heterogeneous Data

Jul 21, 2022
Wenda Chu, Chulin Xie, Boxin Wang, Linyi Li, Lang Yin, Han Zhao, Bo Li

SemAttack: Natural Textual Attacks via Different Semantic Spaces

May 16, 2022
Boxin Wang, Chejian Xu, Xiangyu Liu, Yu Cheng, Bo Li

Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models

Feb 08, 2022
Boxin Wang, Wei Ping, Chaowei Xiao, Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Bo Li, Anima Anandkumar, Bryan Catanzaro

Certifying Out-of-Domain Generalization for Blackbox Functions

Feb 03, 2022
Maurice Weber, Linyi Li, Boxin Wang, Zhikuan Zhao, Bo Li, Ce Zhang

Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models

Nov 04, 2021
Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Bo Li
