Boxin Wang

NVLM: Open Frontier-Class Multimodal LLMs
Sep 17, 2024

RankRAG: Unifying Context Ranking with Retrieval-Augmented Generation in LLMs
Jul 02, 2024

InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining
Oct 11, 2023

DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
Jun 20, 2023

Can Public Large Language Models Help Private Cross-device Federated Learning?
May 20, 2023

Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study
Apr 13, 2023

FOCUS: Fairness via Agent-Awareness for Federated Learning on Heterogeneous Data
Jul 21, 2022

SemAttack: Natural Textual Attacks via Different Semantic Spaces
May 16, 2022

Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models
Feb 08, 2022

Certifying Out-of-Domain Generalization for Blackbox Functions
Feb 03, 2022