
Minghong Fang

Adversarial Attacks to Multi-Modal Models

Sep 10, 2024

Tracing Back the Malicious Clients in Poisoning Attacks to Federated Learning

Jul 09, 2024

Byzantine-Robust Decentralized Federated Learning

Jun 18, 2024

Understanding Server-Assisted Federated Learning in the Presence of Incomplete Client Participation

May 04, 2024

Poisoning Attacks on Federated Learning-based Wireless Traffic Prediction

Apr 22, 2024

Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks

Mar 05, 2024

GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis

Feb 21, 2024

Poisoning Federated Recommender Systems with Fake Users

Feb 18, 2024

Competitive Advantage Attacks to Decentralized Federated Learning

Oct 20, 2023

AFLGuard: Byzantine-robust Asynchronous Federated Learning

Dec 13, 2022