Ege Erdogan

Poisoning $\times$ Evasion: Symbiotic Adversarial Robustness for Graph Neural Networks
Dec 09, 2023
Ege Erdogan, Simon Geisler, Stephan Günnemann

Detecting ChatGPT: A Survey of the State of Detecting ChatGPT-Generated Text
Sep 14, 2023
Mahdi Dhaini, Wessel Poelman, Ege Erdogan

Defense Mechanisms Against Training-Hijacking Attacks in Split Learning
Feb 16, 2023
Ege Erdogan, Unat Teksen, Mehmet Salih Celiktenyildiz, Alptekin Kupcu, A. Ercument Cicek

SplitGuard: Detecting and Mitigating Training-Hijacking Attacks in Split Learning
Aug 23, 2021
Ege Erdogan, Alptekin Kupcu, A. Ercument Cicek

UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning
Aug 20, 2021
Ege Erdogan, Alptekin Kupcu, A. Ercument Cicek
