Wieland Brendel

On Adaptive Attacks to Adversarial Example Defenses

Feb 19, 2020
Florian Tramer, Nicholas Carlini, Wieland Brendel, Aleksander Madry


Increasing the robustness of DNNs against image corruptions by playing the Game of Noise

Jan 29, 2020
Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel


Learning From Brains How to Regularize Machines

Nov 11, 2019
Zhe Li, Wieland Brendel, Edgar Y. Walker, Erick Cobos, Taliah Muhammad, Jacob Reimer, Matthias Bethge, Fabian H. Sinz, Xaq Pitkow, Andreas S. Tolias


Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming

Jul 17, 2019
Claudio Michaelis, Benjamin Mitzkus, Robert Geirhos, Evgenia Rusak, Oliver Bringmann, Alexander S. Ecker, Matthias Bethge, Wieland Brendel


Accurate, reliable and fast robustness evaluation

Jul 01, 2019
Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge


Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet

Mar 20, 2019
Wieland Brendel, Matthias Bethge


On Evaluating Adversarial Robustness

Feb 20, 2019
Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin


ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness

Nov 29, 2018
Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, Wieland Brendel


Towards the first adversarially robust neural network model on MNIST

Sep 20, 2018
Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel
