Phillip Rieger

Technical University Darmstadt

FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning

Dec 07, 2023
Hossein Fereidooni, Alessandro Pegoraro, Phillip Rieger, Alexandra Dmitrienko, Ahmad-Reza Sadeghi

FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks

Oct 03, 2023
Jorge Castillo, Phillip Rieger, Hossein Fereidooni, Qian Chen, Ahmad Sadeghi

ARGUS: Context-Based Detection of Stealthy IoT Infiltration Attacks

Feb 16, 2023
Phillip Rieger, Marco Chilese, Reham Mohamed, Markus Miettinen, Hossein Fereidooni, Ahmad-Reza Sadeghi

BayBFed: Bayesian Backdoor Defense for Federated Learning

Jan 23, 2023
Kavita Kumari, Phillip Rieger, Hossein Fereidooni, Murtuza Jadliwala, Ahmad-Reza Sadeghi

Close the Gate: Detecting Backdoored Models in Federated Learning based on Client-Side Deep Layer Output Analysis

Oct 14, 2022
Phillip Rieger, Torsten Krauß, Markus Miettinen, Alexandra Dmitrienko, Ahmad-Reza Sadeghi

DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection

Jan 03, 2022
Phillip Rieger, Thien Duc Nguyen, Markus Miettinen, Ahmad-Reza Sadeghi
