Peihan Liu

Safeguarding Data in Multimodal AI: A Differentially Private Approach to CLIP Training

Jun 13, 2023
Alyssa Huang, Peihan Liu, Ryumei Nakada, Linjun Zhang, Wanrong Zhang

Figures 1–4 available via arXiv.

Improving Adversarial Transferability with Scheduled Step Size and Dual Example

Jan 30, 2023
Zeliang Zhang, Peihan Liu, Xiaosen Wang, Chenliang Xu

Figures 1–4 available via arXiv.

How Robust is your Fair Model? Exploring the Robustness of Diverse Fairness Strategies

Jul 12, 2022
Edward Small, Wei Shao, Zeliang Zhang, Peihan Liu, Jeffrey Chan, Kacper Sokol, Flora Salim

Figures 1–4 available via arXiv.