
Eldor Abdukhamidov

Unveiling Vulnerabilities in Interpretable Deep Learning Systems with Query-Efficient Black-box Attacks

Jul 21, 2023

Microbial Genetic Algorithm-based Black-box Attack against Interpretable Deep Learning Systems

Jul 13, 2023

Single-Class Target-Specific Attack against Interpretable Deep Learning Systems

Jul 12, 2023

Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning

Nov 29, 2022