Neil Zhenqiang Gong

Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks

Mar 05, 2024
Yichang Xu, Ming Yin, Minghong Fang, Neil Zhenqiang Gong

Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models

Feb 22, 2024
Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong

Visual Hallucinations of Multi-modal Large Language Models

Feb 22, 2024
Wen Huang, Hongbin Liu, Minxin Guo, Neil Zhenqiang Gong

Poisoning Federated Recommender Systems with Fake Users

Feb 18, 2024
Ming Yin, Yichang Xu, Minghong Fang, Neil Zhenqiang Gong

TrustLLM: Trustworthiness in Large Language Models

Jan 25, 2024
Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao

Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents

Dec 03, 2023
Yuqi Jia, Saeed Vahidian, Jingwei Sun, Jianyi Zhang, Vyacheslav Kungurtsev, Neil Zhenqiang Gong, Yiran Chen

Competitive Advantage Attacks to Decentralized Federated Learning

Oct 20, 2023
Yuqi Jia, Minghong Fang, Neil Zhenqiang Gong

Prompt Injection Attacks and Defenses in LLM-Integrated Applications

Oct 19, 2023
Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, Neil Zhenqiang Gong

MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use

Oct 12, 2023
Yue Huang, Jiawen Shi, Yuan Li, Chenrui Fan, Siyuan Wu, Qihui Zhang, Yixin Liu, Pan Zhou, Yao Wan, Neil Zhenqiang Gong, Lichao Sun
