Fangzhao Wu

Benchmarking and Defending Against Indirect Prompt Injection Attacks on Large Language Models

Dec 21, 2023
Jingwei Yi, Yueqi Xie, Bin Zhu, Keegan Hines, Emre Kiciman, Guangzhong Sun, Xing Xie, Fangzhao Wu

Towards Attack-tolerant Federated Learning via Critical Parameter Analysis

Aug 18, 2023
Sungwon Han, Sungwon Park, Fangzhao Wu, Sundong Kim, Bin Zhu, Xing Xie, Meeyoung Cha

FedDefender: Client-Side Attack-Tolerant Federated Learning

Jul 18, 2023
Sungwon Park, Sungwon Han, Fangzhao Wu, Sundong Kim, Bin Zhu, Xing Xie, Meeyoung Cha

FedSampling: A Better Sampling Strategy for Federated Learning

Jun 25, 2023
Tao Qi, Fangzhao Wu, Lingjuan Lyu, Yongfeng Huang, Xing Xie

Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark

May 17, 2023
Wenjun Peng, Jingwei Yi, Fangzhao Wu, Shangxi Wu, Bin Zhu, Lingjuan Lyu, Binxing Jiao, Tong Xu, Guangzhong Sun, Xing Xie

Selective Knowledge Sharing for Privacy-Preserving Federated Distillation without A Good Teacher

Apr 25, 2023
Jiawei Shao, Fangzhao Wu, Jun Zhang

DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision

Mar 15, 2023
Sungwon Han, Seungeon Lee, Fangzhao Wu, Sundong Kim, Chuhan Wu, Xiting Wang, Xing Xie, Meeyoung Cha

Backdoor for Debias: Mitigating Model Bias with Backdoor Attack-based Artificial Bias

Mar 01, 2023
Shangxi Wu, Qiuyang He, Fangzhao Wu, Jitao Sang, Yaowei Wang, Changsheng Xu
