
Fengli Zhang

How to Defend Against Large-scale Model Poisoning Attacks in Federated Learning: A Vertical Solution

Nov 16, 2024