Sabera Talukder

Deep Neural Imputation: A Framework for Recovering Incomplete Brain Recordings

Jun 16, 2022
Sabera Talukder, Jennifer J. Sun, Matthew Leonard, Bingni W. Brunton, Yisong Yue

Neuroscientists and neuroengineers have long relied on multielectrode neural recordings to study the brain. However, in a typical experiment many factors corrupt the recordings from individual electrodes, including electrical noise, movement artifacts, and faulty manufacturing. Common practice is to discard these corrupted recordings, shrinking datasets that are already limited and difficult to collect. To address this challenge, we propose Deep Neural Imputation (DNI), a framework that recovers missing values from electrodes by learning from data collected across spatial locations, days, and participants. We explore our framework with a linear nearest-neighbor approach and two deep generative autoencoders, demonstrating DNI's flexibility. One deep autoencoder models participants individually, while the other extends this architecture to model many participants jointly. We evaluate our models across 12 human participants implanted with multielectrode intracranial electrocorticography arrays; the participants had no explicit task and behaved naturally across hundreds of recording hours. We show that DNI recovers not only time series but also frequency content, and we further establish DNI's practical value by recovering significant performance on a scientifically relevant downstream neural decoding task.

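To make the imputation setup concrete, here is a minimal sketch that trains a small autoencoder to reconstruct electrode channels that have been artificially masked out. The `ImputationAutoencoder` class, the layer sizes, the data shapes, and the masking scheme are illustrative assumptions for exposition, not the models evaluated in the paper.

```python
import torch
import torch.nn as nn

class ImputationAutoencoder(nn.Module):
    """Toy autoencoder that reconstructs all electrodes from a masked input."""

    def __init__(self, n_electrodes: int, n_timesteps: int, latent_dim: int = 64):
        super().__init__()
        flat = n_electrodes * n_timesteps
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(flat, latent_dim), nn.ReLU()
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, flat), nn.Unflatten(1, (n_electrodes, n_timesteps))
        )

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # Zero out "corrupted" electrodes before encoding; reconstruct all of them.
        return self.decoder(self.encoder(x * mask))

n_electrodes, n_timesteps = 64, 500
model = ImputationAutoencoder(n_electrodes, n_timesteps)

x = torch.randn(8, n_electrodes, n_timesteps)          # toy batch of recordings
mask = (torch.rand(8, n_electrodes, 1) > 0.2).float()  # drop ~20% of electrodes

recon = model(x, mask)
# Mask intact electrodes during training so ground truth exists for the
# held-out channels; the loss scores reconstruction on exactly those.
loss = ((recon - x) * (1.0 - mask)).pow(2).mean()
loss.backward()
```

Masking channels for which ground truth is available is what makes the imputation quality measurable; at deployment, the same model would be applied to channels that are genuinely missing.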

On the Benefits of Early Fusion in Multimodal Representation Learning

Nov 14, 2020
George Barnum, Sabera Talukder, Yisong Yue

Intelligently reasoning about the world often requires integrating data from multiple modalities, as any individual modality may contain unreliable or incomplete information. Prior work in multimodal learning fuses input modalities only after significant independent processing, whereas the brain performs multimodal processing almost immediately. This divide between conventional multimodal learning and neuroscience suggests that a detailed study of early multimodal fusion could improve artificial multimodal representations. To facilitate the study of early multimodal fusion, we create a convolutional LSTM network architecture that processes audio and visual inputs simultaneously and allows us to select the layer at which the two modalities combine. Our results demonstrate that fusing audio and visual inputs immediately, in the initial C-LSTM layer, yields higher-performing networks that are more robust to the addition of white noise to both the audio and visual inputs.

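To make the fusion-depth knob concrete, here is a minimal sketch in which audio and visual streams pass through separate convolutional stacks and are concatenated at a selectable layer, with `fuse_at=0` corresponding to immediate (early) fusion. Plain 2-D convolutions stand in for the paper's C-LSTM cells, and the `FusionNet` class, channel counts, and input shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Module:
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU())

class FusionNet(nn.Module):
    """Two modality streams that merge (by channel concatenation) at layer `fuse_at`."""

    def __init__(self, fuse_at: int = 0, n_layers: int = 3, ch: int = 16, n_classes: int = 10):
        super().__init__()
        assert 0 <= fuse_at < n_layers
        # Independent per-modality stacks before the fusion point...
        self.audio = nn.ModuleList(conv_block(1 if i == 0 else ch, ch) for i in range(fuse_at))
        self.video = nn.ModuleList(conv_block(1 if i == 0 else ch, ch) for i in range(fuse_at))
        # ...then one joint stack (concatenation doubles the input width).
        joint_in = 2 if fuse_at == 0 else 2 * ch
        self.joint = nn.ModuleList(
            conv_block(joint_in if i == 0 else ch, ch) for i in range(n_layers - fuse_at)
        )
        self.head = nn.Linear(ch, n_classes)

    def forward(self, audio: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        for fa, fv in zip(self.audio, self.video):
            audio, video = fa(audio), fv(video)
        x = torch.cat([audio, video], dim=1)   # the fusion point
        for layer in self.joint:
            x = layer(x)
        return self.head(x.mean(dim=(2, 3)))   # global average pool + classifier

early = FusionNet(fuse_at=0)   # fuse immediately, in the first layer
late = FusionNet(fuse_at=2)    # keep streams separate for two layers first
a = torch.randn(4, 1, 64, 64)  # e.g. an audio spectrogram as a 2-D map
v = torch.randn(4, 1, 64, 64)  # e.g. a grayscale video frame
print(early(a, v).shape, late(a, v).shape)  # torch.Size([4, 10]) for both
```

Sweeping `fuse_at` while holding depth fixed is one simple way to isolate the effect of fusion depth, which is the comparison the abstract describes.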

Architecture Agnostic Neural Networks

Nov 05, 2020
Sabera Talukder, Guruprasad Raghavan, Yisong Yue

In this paper, we explore an alternate method for synthesizing neural network architectures, inspired by the brain's stochastic synaptic pruning. During a person's lifetime, numerous distinct neuronal architectures perform the same tasks, which indicates that biological neural networks are, to some degree, architecture agnostic. Artificial networks, by contrast, rely on fine-tuned weights and hand-crafted architectures for their remarkable performance. This contrast raises the question: Can we build artificial architecture-agnostic neural networks? To ground this study, we use sparse, binary neural networks that parallel the brain's circuits. Within this sparse, binary paradigm we sample many binary architectures to create families of architecture-agnostic neural networks that are not trained via backpropagation. These high-performing network families share the same sparsity and distribution of binary weights, and they succeed in both static and dynamic tasks. In sum, we create an architecture manifold search procedure to discover families of architecture-agnostic neural networks.

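A toy version of the sampling idea follows: draw many sparse, binary-weight networks at random (no backpropagation), score each on a task, and keep the best performers as a "family." The layer sizes, sparsity level, scoring task, and selection rule below are illustrative assumptions, not the paper's manifold search procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_binary_net(sizes, sparsity=0.9):
    """Draw one architecture: {-1, +1} weights under a random sparse mask."""
    layers = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        weights = rng.choice([-1.0, 1.0], size=(n_in, n_out))  # binary weights
        mask = rng.random((n_in, n_out)) > sparsity            # keep ~10% of edges
        layers.append(weights * mask)
    return layers

def forward(layers, x):
    for w in layers[:-1]:
        x = np.maximum(x @ w, 0.0)  # ReLU between hidden layers
    return x @ layers[-1]

# Toy scoring task: no training at all, just sample-and-evaluate.
x = rng.standard_normal((256, 20))
y = (x[:, 0] > 0).astype(int)

scored = []
for _ in range(200):
    net = sample_binary_net([20, 64, 64, 2], sparsity=0.9)
    acc = float((forward(net, x).argmax(axis=1) == y).mean())
    scored.append((acc, net))

# The best-scoring samples form a small "family": same sparsity, same
# binary weight distribution, found purely by sampling.
scored.sort(key=lambda pair: pair[0], reverse=True)
print("top sampled accuracies:", [round(a, 2) for a, _ in scored[:5]])
```

Because every sampled network shares the same sparsity and weight distribution, any performance differences come from the architecture alone, which is the sense in which the surviving family is "architecture agnostic."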