Utku Ozbulak

Know Your Self-supervised Learning: A Survey on Image-based Generative and Discriminative Training

May 23, 2023
Utku Ozbulak, Hyun Jung Lee, Beril Boga, Esla Timothy Anzaku, Homin Park, Arnout Van Messem, Wesley De Neve, Joris Vankerschaver

Utilizing Mutations to Evaluate Interpretability of Neural Networks on Genomic Data

Dec 12, 2022
Utku Ozbulak, Solha Kang, Jasper Zuallaert, Stephen Depuydt, Joris Vankerschaver

Exact Feature Collisions in Neural Networks

May 31, 2022
Utku Ozbulak, Manvel Gasparyan, Shodhan Rao, Wesley De Neve, Arnout Van Messem

Evaluating Adversarial Attacks on ImageNet: A Reality Check on Misclassification Classes

Nov 22, 2021
Utku Ozbulak, Maura Pintor, Arnout Van Messem, Wesley De Neve

Selection of Source Images Heavily Influences the Effectiveness of Adversarial Attacks

Jun 16, 2021
Utku Ozbulak, Esla Timothy Anzaku, Wesley De Neve, Arnout Van Messem

Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems

Jan 26, 2021
Utku Ozbulak, Baptist Vandersmissen, Azarakhsh Jalalvand, Ivo Couckuyt, Arnout Van Messem, Wesley De Neve

Regional Image Perturbation Reduces $L_p$ Norms of Adversarial Examples While Maintaining Model-to-model Transferability

Jul 07, 2020
Utku Ozbulak, Jonathan Peck, Wesley De Neve, Bart Goossens, Yvan Saeys, Arnout Van Messem

Perturbation Analysis of Gradient-based Adversarial Attacks

Jun 02, 2020
Utku Ozbulak, Manvel Gasparyan, Wesley De Neve, Arnout Van Messem

Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation

Jul 30, 2019
Utku Ozbulak, Arnout Van Messem, Wesley De Neve

Not All Adversarial Examples Require a Complex Defense: Identifying Over-optimized Adversarial Examples with IQR-based Logit Thresholding

Jul 30, 2019
Utku Ozbulak, Arnout Van Messem, Wesley De Neve
