Neil Zhenqiang Gong

SoK: Gradient Leakage in Federated Learning
Apr 08, 2024
Jiacheng Du, Jiahui Hu, Zhibo Wang, Peng Sun, Neil Zhenqiang Gong, Kui Ren

Watermark-based Detection and Attribution of AI-Generated Content
Apr 05, 2024
Zhengyuan Jiang, Moyang Guo, Yuepeng Hu, Neil Zhenqiang Gong

Optimization-based Prompt Injection Attack to LLM-as-a-Judge
Mar 26, 2024
Jiawen Shi, Zenghui Yuan, Yinuo Liu, Yue Huang, Pan Zhou, Lichao Sun, Neil Zhenqiang Gong

Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks
Mar 05, 2024
Yichang Xu, Ming Yin, Minghong Fang, Neil Zhenqiang Gong

Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
Feb 22, 2024
Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong

Visual Hallucinations of Multi-modal Large Language Models
Feb 22, 2024
Wen Huang, Hongbin Liu, Minxin Guo, Neil Zhenqiang Gong

Poisoning Federated Recommender Systems with Fake Users
Feb 18, 2024
Ming Yin, Yichang Xu, Minghong Fang, Neil Zhenqiang Gong

TrustLLM: Trustworthiness in Large Language Models
Jan 25, 2024
Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao

Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents
Dec 03, 2023
Yuqi Jia, Saeed Vahidian, Jingwei Sun, Jianyi Zhang, Vyacheslav Kungurtsev, Neil Zhenqiang Gong, Yiran Chen
