Nicola Strisciuglio

DFM-X: Augmentation by Leveraging Prior Knowledge of Shortcut Learning

Aug 12, 2023
Shunxin Wang, Christoph Brune, Raymond Veldhuis, Nicola Strisciuglio

Neural networks are prone to learning easy solutions from superficial statistics in the data, a phenomenon known as shortcut learning, which impairs the generalization and robustness of models. We propose a data augmentation strategy, named DFM-X, that leverages knowledge about frequency shortcuts, encoded in Dominant Frequencies Maps (DFMs) computed for image classification models. We randomly select X% of the training images of certain classes for augmentation and process them by retaining only the frequencies included in the DFMs of other classes. This strategy compels the models to exploit a broader range of frequencies for classification, rather than relying on specific frequency sets. The models thus learn deeper, task-related semantics compared to their counterparts trained with standard setups. Unlike other commonly used augmentation techniques, which focus on increasing the visual variation of the training data, our method aims to exploit the original data more efficiently, by distilling from the data prior knowledge about the detrimental learning behavior of models. Our experimental results demonstrate that DFM-X improves robustness against common corruptions and adversarial attacks. It can be seamlessly integrated with other augmentation techniques to further enhance the robustness of models.
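
To make the augmentation step concrete, below is a minimal sketch (not the authors' released code) of the frequency-retaining operation described above; the Dominant Frequencies Maps are assumed to be given as binary masks in the Fourier domain, computed beforehand with the frequency-shortcut identification method.

```python
import numpy as np

def dfm_augment(image: np.ndarray, dfm_other_class: np.ndarray) -> np.ndarray:
    """Keep only the frequencies marked in another class's DFM.

    image:            H x W (x C) array in the spatial domain.
    dfm_other_class:  H x W binary mask (fftshifted), 1 where a frequency is dominant.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image, axes=(0, 1)), axes=(0, 1))
    mask = dfm_other_class[..., None] if image.ndim == 3 else dfm_other_class
    filtered = spectrum * mask  # discard frequencies outside the DFM
    recon = np.fft.ifft2(np.fft.ifftshift(filtered, axes=(0, 1)), axes=(0, 1))
    return np.real(recon)

# Hypothetical usage: pick X% of the images of a class, pair each with the DFM
# of a randomly chosen different class, replace the originals, and train as usual.
```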

* Accepted at ICCVW2023 

Defocus Blur Synthesis and Deblurring via Interpolation and Extrapolation in Latent Space

Jul 28, 2023
Ioana Mazilu, Shunxin Wang, Sven Dummer, Raymond Veldhuis, Christoph Brune, Nicola Strisciuglio

Though modern microscopes have autofocusing systems to ensure optimal focus, out-of-focus images can still occur when the cells within the medium do not all lie in the same focal plane, affecting image quality for medical diagnosis and the analysis of diseases. We propose a method that can both deblur images and synthesize defocus blur. We train autoencoders with implicit and explicit regularization techniques to enforce linear relations among the representations of different blur levels in the latent space. This allows different blur levels of an object to be explored by linearly interpolating/extrapolating the latent representations of images taken at different focal planes. Compared to existing works, we use a simple architecture to synthesize images with flexible blur levels, leveraging the linear latent space. Our regularized autoencoders can effectively mimic blurring and deblurring, increasing data variety as a data augmentation technique and improving the quality of microscopic images, which benefits further processing and analysis.
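
A minimal sketch of the latent-space traversal described above, assuming a trained encoder/decoder pair whose latent space has been regularized so that blur levels lie approximately on a line; `encoder`, `decoder`, and the blending coefficient are illustrative placeholders.

```python
import torch

@torch.no_grad()
def blend_blur(encoder, decoder, img_sharp, img_blurred, alpha: float):
    """Synthesize an intermediate (or extrapolated) blur level.

    alpha in [0, 1] interpolates between the two inputs; values outside
    [0, 1] extrapolate towards sharper or more blurred images.
    """
    z_sharp = encoder(img_sharp)      # latent code of the in-focus image
    z_blur = encoder(img_blurred)     # latent code of the defocused image
    z_new = (1.0 - alpha) * z_sharp + alpha * z_blur  # linear traversal
    return decoder(z_new)
```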

* Accepted at CAIP2023 

What do neural networks learn in image classification? A frequency shortcut perspective

Jul 19, 2023
Shunxin Wang, Raymond Veldhuis, Christoph Brune, Nicola Strisciuglio

Frequency analysis is useful for understanding the mechanisms of representation learning in neural networks (NNs). Most research in this area focuses on the learning dynamics of NNs for regression tasks, while little attention has been paid to classification. This study empirically investigates the latter and expands the understanding of frequency shortcuts. First, we perform experiments on synthetic datasets designed to have a bias in different frequency bands. Our results demonstrate that NNs tend to find simple solutions for classification, and that what they learn first during training depends on the most distinctive frequency characteristics, which can be either low or high frequencies. Second, we confirm this phenomenon on natural images. We propose a metric to measure class-wise frequency characteristics and a method to identify frequency shortcuts. The results show that frequency shortcuts can be texture-based or shape-based, depending on what best simplifies the objective. Third, we validate the transferability of frequency shortcuts on out-of-distribution (OOD) test sets. Our results suggest that frequency shortcuts can transfer across datasets and cannot be fully avoided by larger model capacity or data augmentation. We recommend that future research focus on effective training schemes that mitigate frequency shortcut learning.
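
The following is a hedged sketch of the filter-then-evaluate idea behind identifying frequency shortcuts: retain a candidate set of frequencies and check whether the classifier still recognizes the class. The exact class-wise metric and ranking procedure in the paper differ; `model`, `loader`, and the mask granularity are assumptions.

```python
import torch

@torch.no_grad()
def accuracy_on_filtered(model, loader, freq_mask: torch.Tensor) -> float:
    """Accuracy when only the frequencies in `freq_mask` (H x W, fftshifted) survive."""
    correct, total = 0, 0
    for images, labels in loader:                       # images: B x C x H x W
        spec = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
        spec = spec * freq_mask                         # drop the other frequencies
        filtered = torch.fft.ifft2(torch.fft.ifftshift(spec, dim=(-2, -1))).real
        preds = model(filtered).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```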

* Accepted at ICCV2023 

RSA-INR: Riemannian Shape Autoencoding via 4D Implicit Neural Representations

May 22, 2023
Sven Dummer, Nicola Strisciuglio, Christoph Brune

Shape encoding and shape analysis are valuable tools for comparing shapes and for dimensionality reduction. A specific framework for shape analysis is the Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework, which is capable of shape matching and dimensionality reduction. Researchers have recently introduced neural networks into this framework. However, these works cannot match more than two objects simultaneously or have suboptimal performance in shape variability modeling. The latter limitation arises because these works do not use state-of-the-art shape encoding methods. Moreover, the literature does not discuss the connection between the LDDMM Riemannian distance and the Riemannian geometry used in the deep learning literature. Our work aims to bridge this gap by demonstrating how LDDMM can integrate Riemannian geometry into deep learning. Furthermore, we discuss how deep learning solves and generalizes the shape matching and dimensionality reduction formulations of LDDMM. We achieve both goals by designing a novel implicit encoder for shapes. This model extends a neural network-based algorithm for LDDMM pairwise registration, yields a nonlinear manifold PCA, and adds a Riemannian geometry aspect to deep learning models for shape variability modeling. Additionally, we demonstrate that the Riemannian geometry component improves the reconstruction procedure of the implicit encoder in terms of reconstruction quality and stability to noise. We hope our discussion paves the way for more research into how Riemannian geometry, shape/image analysis, and deep learning can be combined.
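
For context, the standard LDDMM registration energy that the abstract builds on reads, in its common form (the paper's exact data term and discrepancy measure are not reproduced here):

```latex
% Sketch of the standard LDDMM registration energy: a time-dependent velocity
% field v_t generates a diffeomorphism phi_1 that carries the source shape q_0
% onto the target q_1; the kinetic-energy term defines the Riemannian distance
% referred to above. The data term d(.,.) and weight lambda are placeholders.
\begin{aligned}
\min_{v}\;& \int_0^1 \lVert v_t \rVert_V^2 \, dt
           \;+\; \lambda\, d\!\left(\varphi_1^{v} \cdot q_0,\; q_1\right)^2, \\
\text{s.t. }\;& \partial_t \varphi_t^{v} = v_t \circ \varphi_t^{v},
           \qquad \varphi_0^{v} = \mathrm{id}.
\end{aligned}
```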

* 26 pages, 20 figures (including subfigures) 

The Robustness of Computer Vision Models against Common Corruptions: a Survey

May 10, 2023
Shunxin Wang, Raymond Veldhuis, Nicola Strisciuglio

The performance of computer vision models is susceptible to unexpected changes in input images when they are deployed in real scenarios. These changes are referred to as common corruptions. While they can hinder the applicability of computer vision models in real-world scenarios, they are not always considered as a testbed for model generalization and robustness. In this survey, we present a comprehensive and systematic overview of methods that improve the corruption robustness of computer vision models. Unlike existing surveys that focus on adversarial attacks and label noise, we cover extensively the study of robustness to common corruptions that can occur when deploying computer vision models in practical applications. We describe different types of image corruption and provide a definition of corruption robustness. We then introduce relevant evaluation metrics and benchmark datasets. We categorize methods into four groups, and also cover indirect methods that show improvements in generalization and may improve corruption robustness as a byproduct. We report benchmark results collected from the literature and find that they were not evaluated in a unified manner, making it difficult to compare and analyze them. We therefore built a unified benchmark framework to obtain directly comparable results on benchmark datasets. Furthermore, we evaluate relevant backbone networks pre-trained on ImageNet using our framework, providing an overview of the base corruption robustness of existing models that helps in choosing appropriate backbones for computer vision tasks. We identify that developing methods to handle a wide range of corruptions, and to learn efficiently with limited data and computational resources, is crucial for future development. Additionally, we highlight the need for further investigation into the relationship among corruption robustness, OOD generalization, and shortcut learning.
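
As an illustration of the evaluation protocol discussed above, here is a short sketch of the widely used mean Corruption Error (mCE) aggregation from the ImageNet-C benchmark; the evaluation callable, baseline errors, and severity range are placeholders, and the survey's own framework may aggregate results differently.

```python
def mean_corruption_error(evaluate_error, baseline_errors, corruptions,
                          severities=range(1, 6)):
    """mCE-style aggregation over corruption types and severities.

    evaluate_error:  callable (corruption, severity) -> top-1 error of the model.
    baseline_errors: dict corruption -> {severity: baseline top-1 error}.
    """
    ces = []
    for c in corruptions:
        err_model = sum(evaluate_error(c, s) for s in severities)
        err_base = sum(baseline_errors[c][s] for s in severities)
        ces.append(err_model / err_base)   # Corruption Error for corruption c
    return sum(ces) / len(ces)             # mean over corruption types (mCE)
```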

Data-efficient Large Scale Place Recognition with Graded Similarity Supervision

Mar 25, 2023
Maria Leyva-Vallina, Nicola Strisciuglio, Nicolai Petkov

Visual place recognition (VPR) is a fundamental task of computer vision for visual localization. Existing methods are trained using image pairs labeled as either depicting the same place or not. Such a binary indication does not account for the continuous relations of similarity between images of the same place taken from different positions, determined by the continuous nature of camera pose. The binary similarity induces a noisy supervision signal into the training of VPR methods, which can stall in local minima and require expensive hard-mining algorithms to guarantee convergence. Motivated by the fact that two images of the same place only partially share visual cues, due to camera pose differences, we deploy an automatic re-annotation strategy to re-label VPR datasets. We compute graded similarity labels for image pairs based on available localization metadata. Furthermore, we propose a new Generalized Contrastive Loss (GCL) that uses graded similarity labels for training contrastive networks. We demonstrate that the use of the new labels and the GCL allows us to dispense with hard-pair mining and to train image descriptors that perform better in VPR by nearest-neighbor search, obtaining results superior or comparable to those of methods that require expensive hard-pair mining and re-ranking techniques. Code and models are available at: https://github.com/marialeyvallina/generalized_contrastive_loss
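
A hedged sketch of how localization metadata can be turned into a graded similarity label in [0, 1] instead of a hard same-place/different-place bit; the paper's annotation functions are based on camera pose and field-of-view overlap, while the linear fall-off below is only an illustration with made-up thresholds.

```python
import numpy as np

def graded_similarity(pos_a, pos_b, yaw_a, yaw_b,
                      max_dist=25.0, max_angle=np.pi / 2) -> float:
    """pos_*: 2D positions (e.g., UTM coordinates); yaw_*: headings in radians."""
    d = np.linalg.norm(np.asarray(pos_a, dtype=float) - np.asarray(pos_b, dtype=float))
    dtheta = abs(np.arctan2(np.sin(yaw_a - yaw_b), np.cos(yaw_a - yaw_b)))
    s_pos = max(0.0, 1.0 - d / max_dist)        # closer cameras -> more shared view
    s_ang = max(0.0, 1.0 - dtheta / max_angle)  # similar heading -> more shared view
    return s_pos * s_ang                        # graded label in [0, 1]
```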

* Accepted at CVPR 2023 

Survey on Automated Short Answer Grading with Deep Learning: from Word Embeddings to Transformers

Mar 11, 2022
Stefan Haller, Adina Aldea, Christin Seifert, Nicola Strisciuglio

Automated short answer grading (ASAG) has gained attention in education as a means to scale educational tasks to the growing number of students. Recent progress in Natural Language Processing and Machine Learning has strongly influenced the field of ASAG, whose recent research advancements we survey. We complement previous surveys by providing a comprehensive analysis of recently published methods that deploy deep learning approaches. In particular, we focus our analysis on the transition from hand-engineered features to representation learning approaches, which learn representative features for the task at hand automatically from large corpora of data. We structure our analysis of deep learning methods along three categories: word embeddings, sequential models, and attention-based methods. Deep learning has impacted ASAG differently than other fields of NLP, as we observed that the learned representations alone do not suffice to achieve the best results; rather, they work in a complementary way with hand-engineered features. The best performance is indeed achieved by methods that combine carefully hand-engineered features with the semantic descriptions provided by the latest models, such as transformer architectures. We identify open challenges and provide an outlook on research directions that can be addressed in the future.
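
As a toy illustration (not taken from the survey) of the representation-learning idea discussed above, the snippet below computes an embedding-based similarity between a student answer and a reference answer, the kind of feature that the best-performing systems combine with hand-engineered ones; it assumes the sentence-transformers package and one of its pretrained models.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # any sentence encoder would do

def similarity_feature(student_answer: str, reference_answer: str) -> float:
    """Cosine similarity between answer embeddings, usable as one grading feature."""
    emb = model.encode([student_answer, reference_answer], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()    # in [-1, 1]; higher = closer
```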

* Under review 

Generalized Contrastive Optimization of Siamese Networks for Place Recognition

Mar 11, 2021
María Leyva-Vallina, Nicola Strisciuglio, Nicolai Petkov

Visual place recognition is a challenging task in computer vision and a key component of camera-based localization and navigation systems. Recently, Convolutional Neural Networks (CNNs) have achieved strong results and good generalization capabilities. They are usually trained using pairs or triplets of images labeled as either similar or dissimilar, in a binary fashion. In practice, the similarity between two images is not binary, but rather continuous. Furthermore, training these CNNs is computationally complex and involves costly pair and triplet mining strategies. We propose a Generalized Contrastive loss (GCL) function that relies on image similarity as a continuous measure, and use it to train a siamese CNN. Furthermore, we propose three techniques for the automatic annotation of image pairs with labels indicating their degree of similarity, and deploy them to re-annotate the MSLS, TB-Places, and 7Scenes datasets. We demonstrate that siamese CNNs trained using the GCL function and the improved annotations consistently outperform their binary counterparts. Our models trained on MSLS outperform state-of-the-art methods, including NetVLAD, and generalize well on the Pittsburgh, TokyoTM, and Tokyo 24/7 datasets. Furthermore, training a siamese network using the GCL function does not require complex pair mining. We release the source code at https://github.com/marialeyvallina/generalized_contrastive_loss.
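
A minimal sketch of a generalized contrastive loss of the kind described above: the classic contrastive loss with the binary label replaced by a graded similarity psi in [0, 1]. The margin value and exact weighting are assumptions, not necessarily the paper's definition.

```python
import torch
import torch.nn.functional as F

def generalized_contrastive_loss(desc_a, desc_b, psi, margin: float = 0.5):
    """desc_a, desc_b: B x D descriptors; psi: B graded similarity labels in [0, 1]."""
    d = F.pairwise_distance(desc_a, desc_b)          # Euclidean distance per pair
    pull = psi * d.pow(2)                            # attract pairs in proportion to psi
    push = (1.0 - psi) * F.relu(margin - d).pow(2)   # repel pairs in proportion to 1 - psi
    return 0.5 * (pull + push).mean()
```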

* Under review 

Brain-inspired algorithms for processing of visual data

Mar 02, 2021
Nicola Strisciuglio

The study of the visual system of the brain has attracted the attention and interest of many neuroscientists, who have derived computational models of some of the neuron types that compose it. These findings inspired researchers in image processing and computer vision to deploy such models to solve problems of visual data processing. In this paper, we review approaches for image processing and computer vision whose design is based on neuroscientific findings about the functions of some neurons in the visual cortex. Furthermore, we analyze the connection between the hierarchical organization of the visual system of the brain and the structure of Convolutional Networks (ConvNets). We pay particular attention to the mechanisms of inhibition of the responses of some neurons, which provide the visual system with improved stability to changing input stimuli, and discuss their implementation in image processing operators and in ConvNets.
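
As a hedged sketch of the inhibition mechanisms mentioned above, in the spirit of push-pull inhibition: an excitatory filter response is suppressed by the response of an inhibitory filter tuned to the opposite contrast. The kernel, inhibition strength, and placement are illustrative assumptions, not the exact operators reviewed in the paper.

```python
import torch.nn.functional as F

def push_pull_response(image, kernel, alpha: float = 0.5):
    """image: B x C x H x W; kernel: K x C x h x w excitatory filter bank (odd, square)."""
    pad = kernel.shape[-1] // 2
    push = F.relu(F.conv2d(image, kernel, padding=pad))   # excitatory response
    pull = F.relu(F.conv2d(image, -kernel, padding=pad))  # response to opposite contrast
    return F.relu(push - alpha * pull)   # inhibition stabilizes the response
```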

MTStereo 2.0: improved accuracy of stereo depth estimation with Max-trees

Jun 27, 2020
Rafael Brandt, Nicola Strisciuglio, Nicolai Petkov

Efficient yet accurate extraction of depth from stereo image pairs is required by systems with low power resources, such as robotics and embedded systems. State-of-the-art stereo matching methods based on convolutional neural networks require intensive computations on GPUs and are difficult to deploy on embedded systems. In this paper, we propose a stereo matching method, called MTStereo 2.0, for limited-resource systems that require efficient and accurate depth estimation. It is based on a Max-tree hierarchical representation of image pairs, which we use to identify matching regions along image scan-lines. The method includes a cost function that considers the similarity of region contextual information based on the Max-trees, and a disparity-border-preserving cost aggregation approach. MTStereo 2.0 improves on its predecessor MTStereo 1.0 as it a) deploys a more robust cost function, b) performs a more thorough detection of incorrect matches, and c) computes disparity maps with pixel-level rather than node-level precision. MTStereo provides accurate sparse and semi-dense depth estimation and does not require intensive GPU computations like CNN-based methods. Thus, it can run on embedded and robotic devices with low power requirements. We tested the proposed approach on several benchmark data sets, namely KITTI 2015, Driving, FlyingThings3D, Middlebury 2014, Monkaa, and the TrimBot2020 garden data set, and achieved competitive accuracy and efficiency. The code is available at https://github.com/rbrandt1/MaxTreeS.
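
To illustrate what matching regions along image scan-lines with a cost function means in practice, here is a toy scan-line block-matching baseline (classic SAD matching, not the Max-tree method of the paper); the window size and disparity range are arbitrary assumptions.

```python
import numpy as np

def scanline_disparity(left: np.ndarray, right: np.ndarray,
                       max_disp: int = 64, win: int = 5) -> np.ndarray:
    """left, right: rectified grayscale H x W images; returns an H x W disparity map."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch_l = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch_l - right[y - half:y + half + 1,
                                            x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))   # disparity with the lowest SAD cost
    return disp
```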
