Felix A. Wichmann

Immediate generalisation in humans but a generalisation lag in deep neural networks -- evidence for representational divergence?

Feb 19, 2024
Lukas S. Huber, Fred W. Mast, Felix A. Wichmann

Neither hype nor gloom do DNNs justice

Dec 08, 2023
Felix A. Wichmann, Simon Kornblith, Robert Geirhos

Are Deep Neural Networks Adequate Behavioural Models of Human Visual Perception?

May 26, 2023
Felix A. Wichmann, Robert Geirhos

The developmental trajectory of object recognition robustness: children are like small adults but unlike big deep neural networks

May 20, 2022
Lukas S. Huber, Robert Geirhos, Felix A. Wichmann

Trivial or impossible -- dichotomous data difficulty masks model differences (on ImageNet and beyond)

Oct 12, 2021
Kristof Meding, Luca M. Schulze Buschoff, Robert Geirhos, Felix A. Wichmann

Partial success in closing the gap between human and machine vision

Jun 14, 2021
Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix A. Wichmann, Wieland Brendel

Deep Neural Models for color discrimination and color constancy

Dec 28, 2020
Alban Flachot, Arash Akbarinia, Heiko H. Schütt, Roland W. Fleming, Felix A. Wichmann, Karl R. Gegenfurtner

On the surprising similarities between supervised and self-supervised models

Oct 16, 2020
Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Matthias Bethge, Felix A. Wichmann, Wieland Brendel

Beyond accuracy: quantifying trial-by-trial behaviour of CNNs and humans by measuring error consistency

Jun 30, 2020
Robert Geirhos, Kristof Meding, Felix A. Wichmann
