Jian Lou

Clients Collaborate: Flexible Differentially Private Federated Learning with Guaranteed Improvement of Utility-Privacy Trade-off

Feb 10, 2024
Yuecheng Li, Tong Wang, Chuan Chen, Jian Lou, Bin Chen, Lei Yang, Zibin Zheng

Cross-silo Federated Learning with Record-level Personalized Differential Privacy

Jan 30, 2024
Junxu Liu, Jian Lou, Li Xiong, Jinfei Liu, Xiaofeng Meng

Contrastive Unlearning: A Contrastive Approach to Machine Unlearning

Jan 19, 2024
Hong kyu Lee, Qiuchen Zhang, Carl Yang, Jian Lou, Li Xiong

Prompt Valuation Based on Shapley Values

Dec 24, 2023
Hanxi Liu, Xiaokai Mao, Haocheng Xia, Jian Lou, Jinfei Liu

Signed Graph Neural Ordinary Differential Equation for Modeling Continuous-time Dynamics

Dec 18, 2023
Lanlan Chen, Kai Wu, Jian Lou, Jing Liu

Certified Minimax Unlearning with Generalization Rates and Deletion Capacity

Dec 16, 2023
Jiaqi Liu, Jian Lou, Zhan Qin, Kui Ren

Does Differential Privacy Prevent Backdoor Attacks in Practice?

Nov 10, 2023
Fereshteh Razmi, Jian Lou, Li Xiong

PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models

Oct 19, 2023
Hongwei Yao, Jian Lou, Zhan Qin

RemovalNet: DNN Fingerprint Removal Attacks

Aug 31, 2023
Hongwei Yao, Zheng Li, Kunzhe Huang, Jian Lou, Zhan Qin, Kui Ren
