Hongcheng Gao

Universal Prompt Optimizer for Safe Text-to-Image Generation

Feb 16, 2024
Zongyu Wu, Hongcheng Gao, Yueze Wang, Xiang Zhang, Suhang Wang

Generative Pretraining in Multimodality

Jul 11, 2023
Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, Xinlong Wang

Evaluating the Robustness of Text-to-image Diffusion Models against Real-world Attacks

Jun 16, 2023
Hongcheng Gao, Hao Zhang, Yinpeng Dong, Zhijie Deng

Revisiting Out-of-distribution Robustness in NLP: Benchmark, Analysis, and LLMs Evaluations

Jun 07, 2023
Lifan Yuan, Yangyi Chen, Ganqu Cui, Hongcheng Gao, Fangyuan Zou, Xingyi Cheng, Heng Ji, Zhiyuan Liu, Maosong Sun

From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework

May 29, 2023
Yangyi Chen, Hongcheng Gao, Ganqu Cui, Lifan Yuan, Dehan Kong, Hanlu Wu, Ning Shi, Bo Yuan, Longtao Huang, Hui Xue, Zhiyuan Liu, Maosong Sun, Heng Ji

Efficient Detection of LLM-generated Texts with a Bayesian Surrogate Model

May 26, 2023
Zhijie Deng, Hongcheng Gao, Yibo Miao, Hao Zhang

Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversarial NLP

Oct 19, 2022
Yangyi Chen, Hongcheng Gao, Ganqu Cui, Fanchao Qi, Longtao Huang, Zhiyuan Liu, Maosong Sun

Exploring the Universal Vulnerability of Prompt-based Learning Paradigm

Apr 11, 2022
Lei Xu, Yangyi Chen, Ganqu Cui, Hongcheng Gao, Zhiyuan Liu