
Hongcheng Gao

Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows?

Jul 15, 2024

AdaMoE: Token-Adaptive Routing with Null Experts for Mixture-of-Experts Language Models

Jun 19, 2024

Adaptive Token Biaser: Knowledge Editing via Biasing Key Entities

Jun 18, 2024

Universal Prompt Optimizer for Safe Text-to-Image Generation

Feb 16, 2024

Generative Pretraining in Multimodality

Jul 11, 2023

Evaluating the Robustness of Text-to-image Diffusion Models against Real-world Attacks

Jun 16, 2023

Revisiting Out-of-distribution Robustness in NLP: Benchmark, Analysis, and LLMs Evaluations

Jun 07, 2023

From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework

May 29, 2023

Efficient Detection of LLM-generated Texts with a Bayesian Surrogate Model

May 26, 2023

Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversarial NLP

Oct 19, 2022