Yangyi Chen

From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework

May 29, 2023
Yangyi Chen, Hongcheng Gao, Ganqu Cui, Lifan Yuan, Dehan Kong, Hanlu Wu, Ning Shi, Bo Yuan, Longtao Huang, Hui Xue, Zhiyuan Liu, Maosong Sun, Heng Ji

A Close Look into the Calibration of Pre-trained Language Models

Oct 31, 2022
Yangyi Chen, Lifan Yuan, Ganqu Cui, Zhiyuan Liu, Heng Ji

Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversarial NLP

Oct 19, 2022
Yangyi Chen, Hongcheng Gao, Ganqu Cui, Fanchao Qi, Longtao Huang, Zhiyuan Liu, Maosong Sun

A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks

Jun 17, 2022
Ganqu Cui, Lifan Yuan, Bingxiang He, Yangyi Chen, Zhiyuan Liu, Maosong Sun

Exploring the Universal Vulnerability of Prompt-based Learning Paradigm

Apr 11, 2022
Lei Xu, Yangyi Chen, Ganqu Cui, Hongcheng Gao, Zhiyuan Liu

Bridge the Gap Between CV and NLP! A Gradient-based Textual Adversarial Attack Framework

Nov 16, 2021
Lifan Yuan, Yichi Zhang, Yangyi Chen, Wei Wei

Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks

Oct 15, 2021
Yangyi Chen, Fanchao Qi, Zhiyuan Liu, Maosong Sun

Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer

Oct 14, 2021
Fanchao Qi, Yangyi Chen, Xurui Zhang, Mukai Li, Zhiyuan Liu, Maosong Sun

Multi-granularity Textual Adversarial Attack with Behavior Cloning

Sep 09, 2021
Yangyi Chen, Jin Su, Wei Wei
