Yinzhi Cao

Follow the Rules: Reasoning for Video Anomaly Detection with Large Language Models
Jul 14, 2024

PLeak: Prompt Leaking Attacks against Large Language Model Applications
May 14, 2024

TrustLLM: Trustworthiness in Large Language Models
Jan 25, 2024

SneakyPrompt: Jailbreaking Text-to-image Generative Models
May 20, 2023

Addressing Heterogeneity in Federated Learning via Distributional Transformation
Oct 26, 2022

EdgeMixup: Improving Fairness for Skin Disease Classification and Segmentation
Feb 28, 2022

Defending Medical Image Diagnostics against Privacy Attacks using Generative Methods
Mar 04, 2021

Practical Blind Membership Inference Attack via Differential Comparisons
Jan 07, 2021

PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning
Apr 12, 2020

Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems
Dec 16, 2017