Marcel van Gerven

Effective Learning with Node Perturbation in Deep Neural Networks

Oct 02, 2023
Sander Dalm, Marcel van Gerven, Nasir Ahmad

Backpropagation (BP) is the dominant and most successful method for training the parameters of deep neural network models. However, BP relies on two computationally distinct phases, does not provide a satisfactory explanation of biological learning, and can be challenging to apply when training networks with discontinuities or noisy node dynamics. By comparison, node perturbation (NP) proposes learning by injecting noise into the network activations and measuring the induced change in loss. NP relies on two forward (inference) passes, does not make use of network derivatives, and has been proposed as a model for learning in biological systems. However, standard NP is highly data inefficient and unstable due to its unguided, noise-based activity search. In this work, we investigate different formulations of NP, relate them to the concept of directional derivatives, and combine NP with a decorrelating mechanism for layer-wise inputs. We find that a closer alignment with directional derivatives, together with decorrelation of inputs at every layer, significantly enhances the performance of NP learning, making it competitive with BP.
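
A minimal NumPy sketch of the basic node-perturbation mechanism described above is given below: noise is injected into every node, the losses of a clean and a noisy forward pass are compared, and each weight is updated by correlating the induced loss change with the noise and the layer input. This is an illustrative reconstruction, not the authors' code; it omits the decorrelation mechanism the paper adds, and the network size, noise scale and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))                     # toy inputs
Y = np.tanh(X @ rng.normal(size=(10, 3)))          # targets from a random teacher

W1 = rng.normal(scale=0.1, size=(10, 16))
W2 = rng.normal(scale=0.1, size=(16, 3))
sigma, lr = 1e-3, 0.01                             # noise scale and learning rate (assumed)

def forward(x, eps1=0.0, eps2=0.0):
    a1 = x @ W1 + eps1                             # pre-activations, optionally perturbed
    h1 = np.tanh(a1)
    a2 = h1 @ W2 + eps2
    return h1, a2

def per_sample_loss(pred):
    return np.mean((pred - Y) ** 2, axis=1)        # one loss value per sample

for step in range(2000):
    # Two forward passes: one clean, one with noise injected into every node.
    h1, out = forward(X)
    e1 = sigma * rng.normal(size=(len(X), 16))
    e2 = sigma * rng.normal(size=(len(X), 3))
    _, out_noisy = forward(X, e1, e2)

    dL = per_sample_loss(out_noisy) - per_sample_loss(out)   # induced loss change
    # Correlating the loss change with the injected noise and the layer input
    # gives a stochastic (directional-derivative-like) gradient estimate.
    W1 -= lr * X.T @ (dL[:, None] / sigma**2 * e1) / len(X)
    W2 -= lr * h1.T @ (dL[:, None] / sigma**2 * e2) / len(X)

    if step % 500 == 0:
        print(step, per_sample_loss(out).mean())
```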

Efficient Deep Reinforcement Learning with Predictive Processing Proximal Policy Optimization

Nov 11, 2022
Burcu Küçükoğlu, Walraaf Borkent, Bodo Rueckauer, Nasir Ahmad, Umut Güçlü, Marcel van Gerven

Advances in reinforcement learning (RL) often rely on massive compute resources and remain notoriously sample inefficient. In contrast, the human brain is able to efficiently learn effective control strategies using limited resources. This raises the question of whether insights from neuroscience can be used to improve current RL methods. Predictive processing is a popular theoretical framework which maintains that the human brain is actively seeking to minimize surprise. We show that recurrent neural networks which predict their own sensory states can be leveraged to minimize surprise, yielding substantial gains in cumulative reward. Specifically, we present the Predictive Processing Proximal Policy Optimization (P4O) agent: an actor-critic reinforcement learning agent that applies predictive processing to a recurrent variant of the PPO algorithm by integrating a world model in its hidden state. P4O significantly outperforms a baseline recurrent variant of the PPO algorithm on multiple Atari games using a single GPU. It also outperforms other state-of-the-art agents given the same wall-clock time and exceeds human gamer performance on multiple games, including Seaquest, a particularly challenging environment in the Atari domain. Altogether, our work underscores how insights from the field of neuroscience may support the development of more capable and efficient artificial agents.

* 17 pages, 6 figures 
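
The core idea can be sketched loosely in PyTorch: a recurrent actor-critic whose hidden state also feeds a head that predicts the next sensory input, with the resulting prediction error ("surprise") added to the usual PPO objective. The layer sizes, the GRU cell, the prediction target, and the loss weighting below are assumptions rather than the P4O architecture.

```python
import torch
import torch.nn as nn

class PredictiveRecurrentAC(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.rnn = nn.GRUCell(hidden, hidden)
        self.policy = nn.Linear(hidden, act_dim)    # actor head (action logits)
        self.value = nn.Linear(hidden, 1)           # critic head
        self.predict = nn.Linear(hidden, obs_dim)   # world-model head: next observation

    def forward(self, obs, h):
        h = self.rnn(self.encode(obs), h)
        return self.policy(h), self.value(h).squeeze(-1), self.predict(h), h

obs_dim, act_dim, batch = 8, 4, 32
net = PredictiveRecurrentAC(obs_dim, act_dim)
obs = torch.randn(batch, obs_dim)                  # observations at time t
next_obs = torch.randn(batch, obs_dim)             # observations at time t + 1
h = torch.zeros(batch, 128)

logits, value, pred_next, h = net(obs, h)

# "Surprise" term: how badly the hidden state predicted the next sensory input.
prediction_loss = nn.functional.mse_loss(pred_next, next_obs)

# Stand-in for the clipped PPO surrogate plus value loss (omitted here); the
# point is only that the prediction error is minimized jointly with it.
ppo_loss = torch.tensor(0.0)
beta = 0.1                                         # assumed weighting coefficient
total_loss = ppo_loss + beta * prediction_loss
total_loss.backward()
```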

Learning Policies for Continuous Control via Transition Models

Sep 16, 2022
Justus Huebotter, Serge Thill, Marcel van Gerven, Pablo Lanillos

It is doubtful that animals have perfect inverse models of their limbs (e.g., knowing exactly what muscle contraction to apply to every joint to reach a particular location in space). However, in robot control, moving an arm's end-effector to a target position or along a target trajectory requires accurate forward and inverse models. Here we show that by learning the transition (forward) model from interaction, we can use it to drive the learning of an amortized policy. Hence, we revisit policy optimization in relation to the deep active inference framework and describe a modular neural network architecture that simultaneously learns the system dynamics from prediction errors and a stochastic policy that generates suitable continuous control commands to reach a desired reference position. We evaluate the model by comparing it against a linear quadratic regulator baseline and conclude with additional steps to take toward human-like motor control.
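
A simplified PyTorch sketch of this two-stage idea follows: fit a transition model on interaction data, then train an amortized policy by pushing the model's predicted next state toward a reference. The deep active inference objective, the stochastic policy, and the network sizes of the paper are not reproduced; the stand-in dynamics and hyperparameters below are assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
state_dim, act_dim = 4, 2
model = nn.Sequential(nn.Linear(state_dim + act_dim, 64), nn.Tanh(),
                      nn.Linear(64, state_dim))               # transition model f(s, a) -> s'
policy = nn.Sequential(nn.Linear(state_dim * 2, 64), nn.Tanh(),
                       nn.Linear(64, act_dim))                # amortized policy pi(s, goal) -> a

opt_model = torch.optim.Adam(model.parameters(), lr=1e-3)
opt_policy = torch.optim.Adam(policy.parameters(), lr=1e-3)

# 1) Fit the transition model on (s, a, s') tuples gathered from interaction
#    (one illustrative gradient step on stand-in data).
s, a = torch.randn(512, state_dim), torch.randn(512, act_dim)
s_next = s + 0.1 * a.repeat(1, state_dim // act_dim)          # stand-in dynamics
model_loss = nn.functional.mse_loss(model(torch.cat([s, a], dim=-1)), s_next)
opt_model.zero_grad(); model_loss.backward(); opt_model.step()

# 2) Train the policy *through* the frozen model: chosen actions should drive
#    the predicted next state toward the desired reference state.
for p in model.parameters():
    p.requires_grad_(False)
goal = torch.zeros(512, state_dim)
action = policy(torch.cat([s, goal], dim=-1))
predicted_next = model(torch.cat([s, action], dim=-1))
policy_loss = nn.functional.mse_loss(predicted_next, goal)
opt_policy.zero_grad(); policy_loss.backward(); opt_policy.step()
```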

Constrained Parameter Inference as a Principle for Learning

Apr 01, 2022
Nasir Ahmad, Ellen Schrader, Marcel van Gerven

Learning in biological and artificial neural networks is often framed as a problem in which targeted error signals guide parameter updates toward more optimal network behaviour. Backpropagation of error (BP) is an example of such an approach and has proven to be a highly successful application of stochastic gradient descent to deep neural networks. However, BP relies on the global transmission of gradient information and has therefore been criticised for its biological implausibility. We propose constrained parameter inference (COPI) as a new principle for learning. COPI allows for the estimation of network parameters under the constraints of decorrelated neural inputs and top-down perturbations of neural states. We show that COPI is not only more biologically plausible but also provides distinct advantages for fast learning compared with the backpropagation algorithm.
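
The layer-wise input decorrelation that COPI relies on can be illustrated with a small NumPy sketch: a decorrelating matrix is applied to a layer's inputs and updated so that their off-diagonal correlations shrink. The anti-Hebbian update and learning rate below are a generic choice for illustration, not necessarily COPI's exact rule, and the parameter-inference step itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lr = 8, 2048, 0.05

# Strongly correlated inputs: a random mixture of independent sources.
A = rng.normal(size=(d, d)) / np.sqrt(d)
X = rng.normal(size=(n, d)) @ A.T

def mean_offdiag(M):
    C = (M.T @ M) / len(M)                  # empirical covariance
    return np.abs(C - np.diag(np.diag(C))).mean()

R = np.eye(d)                               # decorrelating matrix, starts at identity
for _ in range(300):
    Z = X @ R.T                             # decorrelated layer input
    C = (Z.T @ Z) / n
    off_diag = C - np.diag(np.diag(C))      # the correlations we want to remove
    R -= lr * off_diag @ R                  # anti-Hebbian decorrelation update

print("mean |off-diagonal covariance| before:", mean_offdiag(X))
print("mean |off-diagonal covariance| after: ", mean_offdiag(X @ R.T))
```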

Neuroscience-inspired perception-action in robotics: applying active inference for state estimation, control and self-perception

May 10, 2021
Pablo Lanillos, Marcel van Gerven

Unlike robots, humans learn, adapt and perceive their bodies by interacting with the world. Discovering how the brain represents the body and generates actions is of major importance for robotics and artificial intelligence. Here we discuss how neuroscience findings open up opportunities to improve current estimation and control algorithms in robotics. In particular, we consider how active inference, a mathematical formulation of how the brain resists a natural tendency to disorder, provides a unified recipe to potentially solve some of the major challenges in robotics, such as adaptation, robustness, flexibility, generalization and safe interaction. This paper summarizes some experiments and lessons learned from developing such a computational model on real embodied platforms, i.e., humanoid and industrial robots. Finally, we showcase the limitations and challenges that we still face in giving robots human-like perception.

* Accepted at ICLR 2021 Brain2AI workshop 

Scaling up learning with GAIT-prop

Feb 23, 2021
Sander Dalm, Nasir Ahmad, Luca Ambrogioni, Marcel van Gerven

Backpropagation of error (BP) is a widely used and highly successful learning algorithm. However, its reliance on non-local information when propagating error gradients makes it an unlikely candidate for learning in the brain. In the last decade, a number of investigations have focused on determining whether alternative, more biologically plausible computations can be used to approximate BP. This work builds on one such local learning algorithm, Gradient Adjusted Incremental Target Propagation (GAIT-prop), which has recently been shown to approximate BP in a manner that appears biologically plausible. This method constructs local, layer-wise weight update targets in order to enable plausible credit assignment. However, in deep networks, the local weight updates computed by GAIT-prop can deviate from BP for a number of reasons. Here, we provide and test methods to overcome such sources of error. In particular, we adaptively rescale the locally computed errors and show that this significantly increases the performance and stability of the GAIT-prop algorithm when applied to the CIFAR-10 dataset.
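
For orientation, the sketch below illustrates the general structure of local, layer-wise target-based updates in the style of target propagation. It is not the GAIT-prop rule itself: transposed weights stand in for approximate layer inverses, the target step size is an assumption, and the paper's adaptive rescaling of local errors is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 10))
Y = np.tanh(X @ rng.normal(size=(10, 2)))

W1 = rng.normal(scale=0.3, size=(10, 16))
W2 = rng.normal(scale=0.3, size=(16, 2))
gamma, lr = 0.5, 0.1                       # target step size and learning rate (assumed)

for step in range(500):
    h1 = np.tanh(X @ W1)
    out = np.tanh(h1 @ W2)

    # Output target: nudge the network output toward lower loss.
    t_out = out - gamma * (out - Y)
    # Hidden-layer target: carry the output target back one layer
    # (transposed weights as a crude stand-in for an approximate inverse).
    t_h1 = h1 + (t_out - out) @ W2.T

    # Each update is local: a layer only moves its own output toward its own target.
    W2 += lr * h1.T @ (t_out - out) / len(X)
    W1 += lr * X.T @ ((t_h1 - h1) * (1 - h1**2)) / len(X)

    if step % 100 == 0:
        print(step, np.mean((out - Y) ** 2))
```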

Automatic variational inference with cascading flows

Feb 09, 2021
Luca Ambrogioni, Gianluigi Silvestri, Marcel van Gerven

The automation of probabilistic reasoning is one of the primary aims of machine learning. Recently, the confluence of variational inference and deep learning has led to powerful and flexible automatic inference methods that can be trained by stochastic gradient descent. In particular, normalizing flows are highly parameterized deep models that can fit arbitrarily complex posterior densities. However, normalizing flows struggle in highly structured probabilistic programs, as they need to relearn the forward pass of the program. Automatic structured variational inference (ASVI) remedies this problem by constructing variational programs that embed the forward pass. Here, we combine the flexibility of normalizing flows with the prior-embedding property of ASVI in a new family of variational programs, which we name cascading flows. A cascading flows program interposes a newly designed highway flow architecture between the conditional distributions of the prior program so as to steer it toward the observed data. These programs can be constructed automatically from an input probabilistic program and can also be amortized automatically. We evaluate the new variational programs on a series of structured inference problems and find that cascading flows substantially outperform both normalizing flows and ASVI across a large set of these problems.
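
The prior-embedding property inherited from ASVI can be sketched on a toy Gaussian chain in PyTorch: each variational conditional convexly combines the parameters produced by the prior's forward pass with learned parameters. The highway-flow layers that distinguish cascading flows are not reproduced here; the toy model and all hyperparameters are assumptions.

```python
import torch
from torch.distributions import Normal

torch.manual_seed(0)

# Toy generative model: z1 ~ N(0, 1), z2 | z1 ~ N(a*z1 + b, prior_sd), x | z2 ~ N(z2, obs_sd)
a, b, prior_sd, obs_sd = 2.0, 1.0, 0.5, 0.1
x_obs = torch.tensor(4.0)

# Variational parameters. For z2 the mean is lam * (prior mean given z1)
# + (1 - lam) * alpha2, so the prior's forward pass is embedded in q.
m1, log_s1 = torch.zeros(()), torch.zeros(())
alpha2, log_s2 = torch.zeros(()), torch.zeros(())
lam_logit = torch.zeros(())
params = [m1, log_s1, alpha2, log_s2, lam_logit]
for p in params:
    p.requires_grad_(True)
opt = torch.optim.Adam(params, lr=0.05)

for step in range(2000):
    opt.zero_grad()
    q1 = Normal(m1, log_s1.exp())
    z1 = q1.rsample()
    lam = torch.sigmoid(lam_logit)
    q2_mean = lam * (a * z1 + b) + (1 - lam) * alpha2     # ASVI-style convex combination
    q2 = Normal(q2_mean, log_s2.exp())
    z2 = q2.rsample()

    log_p = (Normal(0.0, 1.0).log_prob(z1)
             + Normal(a * z1 + b, prior_sd).log_prob(z2)
             + Normal(z2, obs_sd).log_prob(x_obs))
    log_q = q1.log_prob(z1) + q2.log_prob(z2)
    (-(log_p - log_q)).backward()                         # single-sample negative ELBO
    opt.step()

print("learned prior/posterior mixing weight lambda:", torch.sigmoid(lam_logit).item())
```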

A deep active inference model of the rubber-hand illusion

Aug 17, 2020
Thomas Rood, Marcel van Gerven, Pablo Lanillos

Understanding how perception and action deal with sensorimotor conflicts, such as the rubber-hand illusion (RHI), is essential to understanding how the body adapts to uncertain situations. Recent results in humans have shown that the RHI not only produces a change in the perceived arm location, but also causes involuntary forces. Here, we describe a deep active inference agent in a virtual environment that, when subjected to the RHI, is able to account for these results. We show that our model, which deals with high-dimensional visual inputs, produces perceptual and force patterns similar to those found in humans.

* 8 pages, 3 figures, Accepted in 1st International Workshop on Active Inference, in Conjunction with European Conference of Machine Learning 2020 
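
The perceptual side of such an agent can be sketched generically in PyTorch: a latent belief about the arm state is updated by gradient descent on precision-weighted visual and proprioceptive prediction errors. This is a generic active-inference-style illustration, not the paper's model; the decoder networks, precisions, and update scheme are assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, visual_dim, proprio_dim = 3, 64, 3

# Generative (decoder) models: latent arm state -> predicted sensations.
g_visual = nn.Sequential(nn.Linear(latent_dim, 32), nn.Tanh(), nn.Linear(32, visual_dim))
g_proprio = nn.Linear(latent_dim, proprio_dim)

# Observed sensations (random stand-ins here; in the paper they come from the
# virtual environment, with vision and proprioception put in conflict by the RHI).
x_visual = torch.randn(visual_dim)
x_proprio = torch.randn(proprio_dim)
pi_v, pi_p = 1.0, 1.0                              # sensory precisions (assumed)

mu = torch.zeros(latent_dim, requires_grad=True)   # belief about the arm state
for _ in range(200):
    free_energy = (pi_v * (x_visual - g_visual(mu)).pow(2).sum()
                   + pi_p * (x_proprio - g_proprio(mu)).pow(2).sum())
    grad, = torch.autograd.grad(free_energy, mu)
    with torch.no_grad():
        mu -= 0.01 * grad                          # perception as prediction-error minimization

print("inferred arm state:", mu.detach())
```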

Explainable Deep Learning: A Field Guide for the Uninitiated

Apr 30, 2020
Ning Xie, Gabrielle Ras, Marcel van Gerven, Derek Doran

Deep neural networks (DNNs) are an indispensable machine learning tool for achieving human-level performance on many learning tasks. Yet, due to their black-box nature, it is inherently difficult to understand which aspects of the input data drive the decisions of the network. There are various real-world scenarios in which humans need to make actionable decisions based on the output of DNNs. Such decision support systems can be found in critical domains such as legislation and law enforcement. It is important that the humans making high-level decisions can be sure that the DNN's decisions are driven by combinations of data features that are appropriate in the context of the deployment of the decision support system, and that the decisions made are legally or ethically defensible. Due to the incredible pace at which DNN technology is being developed, the development of new methods and studies on explaining the decision-making process of DNNs has blossomed into an active research field. A practitioner beginning to study explainable deep learning may be intimidated by the plethora of orthogonal directions the field is taking. This complexity is further exacerbated by the general confusion that exists around what it means to explain the actions of a deep learning system and how to evaluate a system's "ability to explain". To alleviate this problem, this article offers a "field guide" to deep learning explainability for those uninitiated in the field. The field guide: i) discusses the traits of a deep learning system that researchers enhance in explainability research, ii) places explainability in the context of other related deep learning research areas, and iii) introduces three simple dimensions defining the space of foundational methods that contribute to explainable deep learning. The guide is designed as an easy-to-digest starting point for those just embarking on the study of the field.

* Survey paper on Explainable Deep Learning, 54 pages including references 

Virtual staining for mitosis detection in Breast Histopathology

Mar 17, 2020
Caner Mercan, Germonda Reijnen-Mooij, David Tellez Martin, Johannes Lotz, Nick Weiss, Marcel van Gerven, Francesco Ciompi

We propose a virtual staining methodology based on Generative Adversarial Networks to map histopathology images of breast cancer tissue from the H&E stain to the PHH3 stain, and vice versa. We use the resulting synthetic images to build convolutional neural networks (CNNs) for the automatic detection of mitotic figures, a strong prognostic biomarker used in routine breast cancer diagnosis and grading. We present several scenarios in which CNNs trained with synthetically generated histopathology images perform on par with, or even better than, the same baseline model trained with real images. We discuss the potential of this approach to scale the number of training samples without the need for manual annotations.

* 5 pages, 4 figures. Accepted for publication at the IEEE International Symposium on Biomedical Imaging (ISBI), 2020 
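
A compact PyTorch sketch of GAN-based stain-to-stain translation in this spirit is shown below, with two generators mapping H&E to PHH3 and back and a discriminator on the PHH3 side. The cycle-consistency term, the tiny convolutional architectures, and the loss weights are illustrative assumptions; the abstract does not specify the exact setup.

```python
import torch
import torch.nn as nn

def conv_net(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, out_ch, 3, padding=1))

G_he2phh3, G_phh32he = conv_net(3, 3), conv_net(3, 3)            # the two generators
D_phh3 = nn.Sequential(conv_net(3, 1), nn.AdaptiveAvgPool2d(1))  # discriminator on PHH3 tiles

he = torch.rand(4, 3, 64, 64)          # batch of H&E tiles
phh3 = torch.rand(4, 3, 64, 64)        # unpaired batch of PHH3 tiles

fake_phh3 = G_he2phh3(he)              # virtual PHH3 staining of the H&E tiles
cycled_he = G_phh32he(fake_phh3)       # map back to check tissue content is preserved

bce = nn.functional.binary_cross_entropy_with_logits
# Generator objective: fool the discriminator while preserving tissue content.
adv_loss = bce(D_phh3(fake_phh3), torch.ones(4, 1, 1, 1))
cycle_loss = nn.functional.l1_loss(cycled_he, he)
g_loss = adv_loss + 10.0 * cycle_loss  # assumed cycle-consistency weight

# Discriminator objective: separate real from synthetic PHH3 tiles.
d_loss = bce(D_phh3(phh3), torch.ones(4, 1, 1, 1)) + \
         bce(D_phh3(fake_phh3.detach()), torch.zeros(4, 1, 1, 1))

# In training, g_loss and d_loss would be minimized alternately with separate
# optimizers (plus symmetric terms for the PHH3 -> H&E direction); the synthetic
# tiles would then be used to train the mitosis-detection CNN.
```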