Yinzhi Cao

TrustLLM: Trustworthiness in Large Language Models

Jan 25, 2024
Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao

SneakyPrompt: Evaluating Robustness of Text-to-image Generative Models' Safety Filters

May 20, 2023
Yuchen Yang, Bo Hui, Haolin Yuan, Neil Gong, Yinzhi Cao

Addressing Heterogeneity in Federated Learning via Distributional Transformation

Oct 26, 2022
Haolin Yuan, Bo Hui, Yuchen Yang, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao

EdgeMixup: Improving Fairness for Skin Disease Classification and Segmentation

Feb 28, 2022
Haolin Yuan, Armin Hadzic, William Paul, Daniella Villegas de Flores, Philip Mathew, John Aucott, Yinzhi Cao, Philippe Burlina

Defending Medical Image Diagnostics against Privacy Attacks using Generative Methods

Mar 04, 2021
William Paul, Yinzhi Cao, Miaomiao Zhang, Phil Burlina

Practical Blind Membership Inference Attack via Differential Comparisons

Jan 07, 2021
Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao

PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning

Apr 12, 2020
Chenglin Yang, Adam Kortylewski, Cihang Xie, Yinzhi Cao, Alan Yuille

Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems

Dec 16, 2017
Kexin Pei, Yinzhi Cao, Junfeng Yang, Suman Jana
