
Neil Zhenqiang Gong

Link Stealing Attacks Against Inductive Graph Neural Networks

May 09, 2024

SoK: Gradient Leakage in Federated Learning

Apr 08, 2024

Watermark-based Detection and Attribution of AI-Generated Content

Apr 05, 2024

Optimization-based Prompt Injection Attack to LLM-as-a-Judge

Mar 26, 2024

Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks

Mar 05, 2024

Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models

Feb 22, 2024

Visual Hallucinations of Multi-modal Large Language Models

Feb 22, 2024

Poisoning Federated Recommender Systems with Fake Users

Feb 18, 2024

TrustLLM: Trustworthiness in Large Language Models

Jan 25, 2024

Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents

Dec 03, 2023