
Yuan Hong

Illinois Institute of Technology, IL, United States

An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection

Jun 10, 2024

LMO-DP: Optimizing the Randomization Mechanism for Differentially Private Fine-Tuning (Large) Language Models

May 29, 2024

Certifying Adapters: Enabling and Enhancing the Certification of Classifier Adversarial Robustness

May 25, 2024

On the Faithfulness of Vision Transformer Explanations

Apr 01, 2024

Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks

Mar 04, 2024

FLTracer: Accurate Poisoning Attack Provenance in Federated Learning

Oct 20, 2023

Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks

Jul 31, 2023

Certifiable Black-Box Attack: Ensuring Provably Successful Attack for Adversarial Examples

Apr 10, 2023

OpBoost: A Vertical Federated Tree Boosting Framework Based on Order-Preserving Desensitization

Oct 04, 2022

On Fair Classification with Mostly Private Sensitive Attributes

Jul 18, 2022