
Alexander Hadjiivanov


On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing

Aug 01, 2023
Loïc J. Azzalini, Emmanuel Blazquez, Alexander Hadjiivanov, Gabriele Meoni, Dario Izzo


An event-based camera outputs an event whenever a change in scene brightness of a preset magnitude is detected at a particular pixel location in the sensor plane. The resulting sparse and asynchronous output, coupled with the high dynamic range and temporal resolution of this novel sensor, motivates the study of event-based cameras for navigation and landing applications. However, the lack of real-world and synthetic datasets to support this line of research has limited its consideration for onboard use. This paper presents a methodology and a software pipeline for generating event-based vision datasets from optimal landing trajectories during the approach of a target body. We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility (PANGU) at different viewpoints along a set of optimal descent trajectories obtained by varying the boundary conditions. The generated image sequences are then converted into event streams by means of an event-based camera emulator. We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories, complete with event streams and motion-field ground-truth data. We anticipate that novel event-based vision datasets can be generated with this pipeline to support various spacecraft pose reconstruction problems with events as input, and we hope that the proposed methodology will attract the attention of researchers working at the intersection of neuromorphic vision and guidance, navigation and control.
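
As a rough illustration of the event-generation principle described above, the sketch below converts a sequence of grayscale frames into (timestamp, x, y, polarity) events by thresholding per-pixel log-brightness changes. It is not the emulator used in the paper; the function name, the contrast threshold value, and the absence of noise modelling and inter-frame timestamp interpolation are all simplifying assumptions.

```python
import numpy as np

def frames_to_events(frames, timestamps, contrast_threshold=0.15, eps=1e-6):
    """Convert a sequence of grayscale frames into a list of events.

    Simplified, illustrative emulation: an event (t, x, y, polarity) fires
    whenever the log-brightness at a pixel has changed by more than the
    contrast threshold since the last event at that pixel.
    """
    log_ref = np.log(frames[0].astype(np.float64) + eps)  # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_frame = np.log(frame.astype(np.float64) + eps)
        diff = log_frame - log_ref
        # Positive (brightening) and negative (darkening) events
        for polarity, mask in ((+1, diff >= contrast_threshold),
                               (-1, diff <= -contrast_threshold)):
            ys, xs = np.nonzero(mask)
            events.extend((t, x, y, polarity) for x, y in zip(xs, ys))
            # Reset the reference only where events actually fired
            log_ref[mask] = log_frame[mask]
    return events
```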


Neuromorphic Computing and Sensing in Space

Dec 17, 2022
Dario Izzo, Alexander Hadjiivanov, Dominik Dold, Gabriele Meoni, Emmanuel Blazquez


The term "neuromorphic" refers to systems that closely resemble the architecture and/or the dynamics of biological neural networks. Typical examples are novel computer chips designed to mimic the architecture of a biological brain, or sensors that draw inspiration from, e.g., the visual or olfactory systems of insects and mammals to acquire information about the environment. This approach is not without ambition, as it promises to enable engineered devices that reproduce the level of performance observed in biological organisms, the main immediate advantage being the efficient use of scarce resources, which translates into low power requirements. The emphasis on low power and energy efficiency makes neuromorphic devices a perfect match for space applications. Spacecraft, especially miniaturized ones, have strict energy constraints, as they need to operate in an environment that is scarce in resources and extremely hostile. In this work we present an overview of early attempts to study a neuromorphic approach in a space context at the European Space Agency's (ESA) Advanced Concepts Team (ACT).


Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis

May 08, 2021
Alexander Hadjiivanov


Most classical (non-spiking) neural network models disregard internal neuron dynamics and treat neurons as simple input integrators. However, biological neurons have an internal state governed by complex dynamics that plays a crucial role in learning, adaptation and the overall network activity and behaviour. This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model, which combines several biologically inspired mechanisms to efficiently simulate internal neuron dynamics with a single parameter analogous to the membrane time constant in biological neurons. The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with fluctuating input. One consequence of the MPATH model is that it imbues neurons with a sense of time without recurrent connections, paving the way for modelling processes that depend on temporal aspects of neuron activity. Experiments demonstrate the model's ability to adapt to and continually learn from its input.

* 19 pages 
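
The exact MPATH update equations are given in the paper; the sketch below only illustrates the general idea of a leaky neuron whose firing threshold adapts to its own activity, so that its output rate self-regulates under fluctuating input. The class name, the single time-constant parameter `tau`, and the threshold dynamics are illustrative assumptions, not the published model.

```python
class HomeostaticNeuron:
    """Illustrative leaky neuron with an adaptive firing threshold.

    Not the MPATH update rules themselves; a generic sketch: the membrane
    potential decays with a single time constant, and the threshold tracks
    recent activity so that the firing rate settles into an equilibrium.
    """
    def __init__(self, tau=20.0, theta_rest=1.0, theta_gain=0.5, theta_decay=100.0):
        self.tau = tau              # membrane time constant (single dynamics parameter)
        self.v = 0.0                # membrane potential
        self.theta = theta_rest     # adaptive threshold
        self.theta_rest = theta_rest
        self.theta_gain = theta_gain
        self.theta_decay = theta_decay

    def step(self, x, dt=1.0):
        # Leaky integration of the input
        self.v += dt * (-self.v / self.tau + x)
        # Threshold relaxes back towards its resting value
        self.theta += dt * (self.theta_rest - self.theta) / self.theta_decay
        spike = self.v >= self.theta
        if spike:
            self.v = 0.0                   # reset after a spike
            self.theta += self.theta_gain  # raise threshold: homeostatic damping
        return spike
```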

Epigenetic evolution of deep convolutional models

Apr 12, 2021
Alexander Hadjiivanov, Alan Blair


In this study, we build upon a previously proposed neuroevolution framework to evolve deep convolutional models. Specifically, the genome encoding and the crossover operator are extended to make them applicable to layered networks. We also propose a convolutional layer layout which allows kernels of different shapes and sizes to coexist within the same layer, and present an argument as to why this may be beneficial. The proposed layout enables the size and shape of individual kernels within a convolutional layer to be evolved with a corresponding new mutation operator. The proposed framework employs a hybrid optimisation strategy involving structural changes through epigenetic evolution and weight update through backpropagation in a population-based setting. Experiments on several image classification benchmarks demonstrate that the crossover operator is sufficiently robust to produce increasingly performant offspring even when the parents are trained on only a small random subset of the training dataset in each epoch, thus providing direct confirmation that learned features and behaviour can be successfully transferred from parent networks to offspring in the next generation.

* 2019 IEEE Congress on Evolutionary Computation (CEC)  
* 8 pages 
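
A minimal sketch of the kind of mixed-kernel convolutional layout described above, written with PyTorch for brevity: each kernel shape gets its own branch, padded so the outputs align, and the feature maps are concatenated channel-wise. The class name, the `kernel_specs` argument, and the branch-per-shape design are assumptions for illustration; the paper's own layout and its evolvable genome encoding are more involved.

```python
import torch
import torch.nn as nn

class MixedKernelConv(nn.Module):
    """Sketch of a convolutional layer whose kernels have different shapes.

    Each kernel shape gets its own Conv2d, padded so outputs align spatially,
    and the resulting feature maps are concatenated along the channel axis.
    A mutation operator could then add, remove, or resize individual branches.
    """
    def __init__(self, in_channels, kernel_specs):
        # kernel_specs: list of (out_channels, (kh, kw)) tuples with odd kh, kw
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, out_ch, kernel_size=(kh, kw),
                      padding=(kh // 2, kw // 2))
            for out_ch, (kh, kw) in kernel_specs
        )

    def forward(self, x):
        return torch.cat([branch(x) for branch in self.branches], dim=1)

# Example: one layer mixing 3x3, 5x5 and asymmetric 1x7 kernels
layer = MixedKernelConv(3, [(8, (3, 3)), (4, (5, 5)), (4, (1, 7))])
out = layer(torch.randn(1, 3, 32, 32))   # -> shape (1, 16, 32, 32)
```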

Adaptive conversion of real-valued input into spike trains

Apr 12, 2021
Alexander Hadjiivanov


This paper presents a biologically plausible method for converting real-valued input into spike trains for processing with spiking neural networks. The proposed method mimics the adaptive behaviour of retinal ganglion cells and allows input neurons to adapt their response to changes in the statistics of the input. Thus, rather than passively receiving values and forwarding them to the hidden and output layers, the input layer acts as a self-regulating filter which emphasises deviations from the average while allowing the input neurons to become effectively desensitised to the average itself. Another merit of the proposed method is that it requires only one input neuron per variable, rather than an entire population of neurons as in the case of the commonly used conversion method based on Gaussian receptive fields. In addition, since the statistics of the input emerge naturally over time, it becomes unnecessary to pre-process the data before feeding it to the network. This enables spiking neural networks to process raw, non-normalised streaming data. A proof-of-concept experiment is performed to demonstrate that the proposed method operates as expected.

* 2016 International Joint Conference on Neural Networks  
* 8 pages 
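
A sketch of the adaptive encoding idea, under the assumption that running mean and variance estimates are a reasonable stand-in for the retina-inspired adaptation mechanism in the paper: each input neuron tracks the statistics of its own variable and spikes in proportion to the deviation from the running average, so no pre-normalisation of the raw stream is needed. The class name and the stochastic spiking rule are illustrative choices, not the published method.

```python
import numpy as np

class AdaptiveSpikeEncoder:
    """Illustrative encoder turning a raw real-valued stream into spikes.

    One neuron per variable: each keeps running estimates of its mean and
    variance and emits spikes driven by the normalised deviation from the
    mean, so it is effectively desensitised to the average itself.
    """
    def __init__(self, alpha=0.05, gain=1.0, eps=1e-6):
        self.alpha = alpha    # adaptation rate of the running statistics
        self.gain = gain      # scales deviation into spike probability
        self.eps = eps
        self.mean = None
        self.var = None

    def step(self, x, rng=np.random):
        x = np.asarray(x, dtype=np.float64)
        if self.mean is None:
            self.mean = x.copy()
            self.var = np.ones_like(x)
        # Exponential running estimates of mean and variance
        self.mean += self.alpha * (x - self.mean)
        self.var += self.alpha * ((x - self.mean) ** 2 - self.var)
        # Normalised deviation drives stochastic ON/OFF spiking
        z = (x - self.mean) / np.sqrt(self.var + self.eps)
        p = np.clip(self.gain * np.abs(z), 0.0, 1.0)
        spikes = (rng.random(x.shape) < p).astype(int) * np.sign(z).astype(int)
        return spikes   # +1 / -1 / 0 per input variable
```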

Complexity-based speciation and genotype representation for neuroevolution

Oct 11, 2020
Alexander Hadjiivanov, Alan Blair


This paper introduces a speciation principle for neuroevolution where evolving networks are grouped into species based on the number of hidden neurons, which is indicative of the complexity of the search space. This speciation principle is indivisibly coupled with a novel genotype representation which is characterised by zero genome redundancy, high resilience to bloat, explicit marking of recurrent connections, as well as an efficient and reproducible stack-based evaluation procedure for networks with arbitrary topology. Furthermore, the proposed speciation principle is employed in several techniques designed to promote and preserve diversity within species and in the ecosystem as a whole. The competitive performance of the proposed framework, named Cortex, is demonstrated through experiments. A highly customisable software platform which implements the concepts proposed in this study is also introduced in the hope that it will serve as a useful and reliable tool for experimentation in the field of neuroevolution.

* 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, 2016, pp. 3092-3101  
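
Since species are keyed purely by hidden-neuron count, the speciation step itself reduces to a grouping operation; the sketch below shows that step in isolation, with the genome representation and the `hidden_counter` helper as assumptions (the Cortex framework couples speciation with its own genotype encoding).

```python
from collections import defaultdict

def speciate_by_complexity(genomes, hidden_counter=len):
    """Group genomes into species keyed by their number of hidden neurons.

    `genomes` is any iterable and `hidden_counter` maps a genome to its
    hidden-neuron count; the default len() assumes a genome is represented
    as a list of hidden-neuron descriptors.
    """
    species = defaultdict(list)
    for genome in genomes:
        species[hidden_counter(genome)].append(genome)
    return dict(species)

# Example: five genomes represented as lists of hidden-neuron descriptors
genomes = [['h1'], ['h1', 'h2'], ['h1'], ['h1', 'h2', 'h3'], ['h1', 'h2']]
print(speciate_by_complexity(genomes))   # {1: [...], 2: [...], 3: [...]}
```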