
Paul Sajda


Circular Clustering with Polar Coordinate Reconstruction

Sep 15, 2023
Xiaoxiao Sun, Paul Sajda

There is growing interest in characterizing circular data found in biological systems. Such data are wide-ranging and varied, from signal phase in neural recordings to nucleotide sequences in circular genomes. Traditional clustering algorithms are often inadequate because of their limited ability to distinguish differences in the periodic component. Current clustering schemes that work in a polar coordinate system have limitations, such as being focused only on the angle or lacking generality. To overcome these limitations, we propose a new analysis framework that uses projections onto a cylindrical coordinate system to better represent objects in a polar coordinate system. Using the mathematical properties of circular data, we show that our approach always finds the correct clustering result within the reconstructed dataset, given sufficient periodic repetitions of the data. Our approach is generally applicable and adaptable and can be incorporated into most state-of-the-art clustering algorithms. We demonstrate on synthetic and real data that our method generates more appropriate and consistent clustering results than standard methods. In summary, our proposed analysis framework overcomes the limitations of existing polar coordinate-based clustering methods and provides a more accurate and efficient way to cluster circular data.

* Manuscript is under review at IEEE Transactions on Computational Biology and Bioinformatics. Copyright is held by IEEE
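
A minimal sketch of the general idea follows: embed each (angle, magnitude) observation in cylindrical coordinates so that a standard clustering algorithm respects the periodicity of the angle. The feature construction below is an illustrative assumption, not the paper's exact reconstruction procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def cylindrical_features(theta, magnitude):
    """Map (angle, magnitude) pairs to 3-D cylindrical coordinates so that
    Euclidean distance respects the 2*pi periodicity of the angle."""
    return np.column_stack([np.cos(theta), np.sin(theta), magnitude])

# Toy data: two phase clusters centered at 0 and pi, with varying magnitudes.
rng = np.random.default_rng(0)
theta = np.concatenate([rng.normal(0.0, 0.2, 100),
                        rng.normal(np.pi, 0.2, 100)]) % (2 * np.pi)
magnitude = rng.uniform(0.5, 1.5, 200)

X = cylindrical_features(theta, magnitude)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

Representing the angle as (cos θ, sin θ) removes the artificial break at 0/2π that makes standard Euclidean clustering misgroup nearby phases.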

Fixating on Attention: Integrating Human Eye Tracking into Vision Transformers

Aug 26, 2023
Sharath Koorathota, Nikolas Papadopoulos, Jia Li Ma, Shruti Kumar, Xiaoxiao Sun, Arunesh Mittal, Patrick Adelman, Paul Sajda

Modern transformer-based models designed for computer vision have outperformed humans across a spectrum of visual tasks. However, critical tasks, such as medical image interpretation or autonomous driving, still require reliance on human judgment. This work demonstrates how human visual input, specifically fixations collected from an eye-tracking device, can be integrated into transformer models to improve accuracy across multiple driving situations and datasets. First, we establish the significance of fixation regions in left-right driving decisions, as observed in both human subjects and a Vision Transformer (ViT). By comparing the similarity between human fixation maps and ViT attention weights, we reveal the dynamics of overlap across individual heads and layers. This overlap can be exploited for model pruning without compromising accuracy. We then combine information from the driving scene with fixation data, employing a "joint space-fixation" (JSF) attention setup. Lastly, we propose a "fixation-attention intersection" (FAX) loss to train the ViT model to attend to the same regions that humans fixate on. We find that JSF and FAX improve ViT accuracy and reduce the number of training epochs required. These results hold significant implications for human-guided artificial intelligence.

* 25 pages, 9 figures, 3 tables 
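
As a rough illustration of the fixation-attention intersection idea, the sketch below penalizes divergence between a ViT's CLS-to-patch attention and a human fixation map. The KL form, tensor shapes, and all names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fax_style_loss(attn, fixation, eps=1e-8):
    """attn: (batch, heads, patches) CLS-to-patch attention weights.
    fixation: (batch, patches) human fixation density over patches."""
    attn = attn.mean(dim=1)                                # average over heads
    attn = attn / (attn.sum(dim=-1, keepdim=True) + eps)   # normalize to a distribution
    fixation = fixation / (fixation.sum(dim=-1, keepdim=True) + eps)
    # KL divergence pushing attention toward the fixation distribution
    return F.kl_div((attn + eps).log(), fixation, reduction="batchmean")

attn = torch.rand(4, 12, 196)       # e.g. 12 heads over 14x14 patches (toy values)
fixation = torch.rand(4, 196)
loss = fax_style_loss(attn, fixation)
```

In practice such a term would be weighted and added to the task loss so the model jointly optimizes accuracy and agreement with human fixations.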

Bayesian Beta-Bernoulli Process Sparse Coding with Deep Neural Networks

Mar 14, 2023
Arunesh Mittal, Kai Yang, Paul Sajda, John Paisley

Several approximate inference methods have been proposed for deep discrete latent variable models. However, non-parametric methods, which have previously been employed successfully for classical sparse coding models, remain largely unexplored in the context of deep models. We propose a non-parametric iterative algorithm for learning discrete latent representations in such deep models. Additionally, to learn scale-invariant discrete features, we propose local data scaling variables. Lastly, to encourage sparsity in our representations, we propose a Beta-Bernoulli process prior on the latent factors. We evaluate our sparse coding model coupled with different likelihood models across datasets with varying characteristics, and compare our results to current amortized approximate inference methods.
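
A toy generative sketch of the ingredients named above, a Beta-Bernoulli prior on factor usage plus local data scaling variables, is given below. The specific distributional forms and hyperparameters are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, N = 50, 32, 200                      # truncation level, data dim, observations
a, b = 5.0, 1.0                            # Beta-Bernoulli hyperparameters

pi = rng.beta(a / K, b, size=K)            # factor inclusion probabilities
Z = rng.binomial(1, pi, size=(N, K))       # sparse binary codes (Bernoulli masks)
W = rng.normal(0, 1, size=(K, D))          # dictionary / factor loadings
s = rng.gamma(2.0, 1.0, size=(N, 1))       # local data scaling variables
X = s * (Z @ W) + rng.normal(0, 0.1, size=(N, D))  # observations under a Gaussian likelihood
```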

Inferring latent neural sources via deep transcoding of simultaneously acquired EEG and fMRI

Nov 27, 2022
Xueqing Liu, Tao Tu, Paul Sajda

Simultaneous EEG-fMRI is a multi-modal neuroimaging technique that provides complementary spatial and temporal resolution. A persistent challenge has been developing principled and interpretable approaches for fusing the modalities, specifically approaches enabling inference of latent source spaces representative of neural activity. In this paper, we address this inference problem within the framework of transcoding -- mapping from a specific encoding (modality) to a decoding (the latent source space) and then encoding the latent source space to the other modality. Specifically, we develop a symmetric method consisting of a cyclic convolutional transcoder that transcodes EEG to fMRI and vice versa. Without any prior knowledge of either the hemodynamic response function or lead field matrix, the completely data-driven method exploits the temporal and spatial relationships between the modalities and latent source spaces to learn these mappings. We quantify, for both simulated and real EEG-fMRI data, how well the modalities can be transcoded from one to another as well as the source spaces that are recovered, all evaluated on unseen data. In addition to enabling a new way to symmetrically infer a latent source space, the method can also be seen as low-cost computational neuroimaging -- i.e. generating an 'expensive' fMRI BOLD image from 'low cost' EEG data.
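
The sketch below caricatures one direction of the cyclic transcoder: an input modality is mapped through a latent source space and out to the other modality. Layer choices, channel counts, and the 1-D convolutions are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class Transcoder(nn.Module):
    """One direction of the cycle: modality -> latent sources -> other modality."""
    def __init__(self, in_ch, src_ch, out_ch):
        super().__init__()
        self.encode = nn.Sequential(        # modality -> latent source estimate
            nn.Conv1d(in_ch, src_ch, kernel_size=5, padding=2), nn.ReLU())
        self.decode = nn.Conv1d(src_ch, out_ch, kernel_size=5, padding=2)

    def forward(self, x):
        sources = self.encode(x)            # latent neural sources
        return self.decode(sources), sources

eeg2fmri = Transcoder(in_ch=64, src_ch=16, out_ch=100)   # e.g. 64 EEG channels in
fmri2eeg = Transcoder(in_ch=100, src_ch=16, out_ch=64)   # e.g. 100 fMRI ROIs in

eeg = torch.randn(2, 64, 512)               # batch, channels, time (toy shapes)
fmri_hat, sources = eeg2fmri(eeg)           # predicted fMRI plus recovered sources
```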

Improving Prediction of Cognitive Performance using Deep Neural Networks in Sparse Data

Dec 28, 2021
Sharath Koorathota, Arunesh Mittal, Richard P. Sloan, Paul Sajda

Cognition in midlife is an important predictor of age-related mental decline, and statistical models that predict cognitive performance can be useful for predicting decline. However, existing models struggle to capture complex relationships between physical, sociodemographic, psychological and mental health factors that affect cognition. Using data from an observational cohort study, Midlife in the United States (MIDUS), we modeled a large number of variables to predict executive function and episodic memory measures. We used cross-sectional and longitudinal outcomes with varying sparsity, or amount of missing data. Deep neural network (DNN) models consistently ranked highest in all of the cognitive performance prediction tasks, as assessed with root mean squared error (RMSE) on out-of-sample data. RMSE differences between DNN and other model types were statistically significant (T(8) = -3.70; p < 0.05). The interaction effect between model type and sparsity was significant (F(9) = 59.20; p < 0.01), indicating that the success of DNNs can partly be attributed to their robustness and ability to model hierarchical relationships between health-related factors. Our findings underscore the potential of neural networks to model clinical datasets and allow a better understanding of the factors that lead to cognitive decline.
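
As a hedged illustration of the modeling setup (a neural network regressor on tabular health data with substantial missingness), the sketch below uses mean imputation with missing-indicator features. The MIDUS variables and the study's actual architecture are not reproduced here; all names and values are placeholders.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                 # toy health-related predictors
X[rng.random(X.shape) < 0.3] = np.nan          # ~30% missingness ("sparsity")
y = np.nansum(X[:, :5], axis=1) + rng.normal(0, 0.5, 500)  # toy cognitive score

model = make_pipeline(
    SimpleImputer(strategy="mean", add_indicator=True),   # impute + missingness flags
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0))
model.fit(X, y)
```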

Bayesian recurrent state space model for rs-fMRI

Nov 14, 2020
Arunesh Mittal, Scott Linderman, John Paisley, Paul Sajda

We propose a hierarchical Bayesian recurrent state space model for modeling switching network connectivity in resting state fMRI data. Our model allows us to uncover shared network patterns across disease conditions. We evaluate our method on the ADNI2 dataset by inferring latent state patterns corresponding to altered neural circuits in individuals with Mild Cognitive Impairment (MCI). In addition to states shared across healthy individuals and those with MCI, we discover latent states that are observed predominantly in individuals with MCI. Our model outperforms the current state-of-the-art deep learning method on the ADNI2 dataset.

* Machine Learning for Health (ML4H) at NeurIPS 2020 - Extended Abstract 
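
A toy generative sketch of a switching state space model is below: a sticky Markov chain over discrete brain states selects per-state linear dynamics. This illustrates the model class only; the paper's recurrent, hierarchical Bayesian formulation is richer than this.

```python
import numpy as np

rng = np.random.default_rng(0)
S, D, T = 3, 10, 200                            # states, ROIs, time points
P = np.full((S, S), 0.05) + 0.85 * np.eye(S)    # sticky state transition matrix
P /= P.sum(axis=1, keepdims=True)
A = rng.normal(0, 0.2, size=(S, D, D))          # per-state dynamics / connectivity

z = np.zeros(T, dtype=int)
x = np.zeros((T, D))
for t in range(1, T):
    z[t] = rng.choice(S, p=P[z[t - 1]])         # latent network state
    x[t] = A[z[t]] @ x[t - 1] + rng.normal(0, 0.1, D)   # observed dynamics
```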

Deep Bayesian Nonparametric Factor Analysis

Nov 09, 2020
Arunesh Mittal, Paul Sajda, John Paisley

We propose a deep generative factor analysis model with a beta process prior that can approximate complex non-factorial distributions over the latent codes. We outline a stochastic EM algorithm for scalable inference in a specific instantiation of this model and present some preliminary results.
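
To give a flavor of the stochastic EM style of inference mentioned above, the sketch below runs minibatch EM on plain linear factor analysis. It is an analogy under simplifying assumptions (unit noise, no beta process, no deep likelihood), not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, K = 1000, 20, 5
W_true = rng.normal(size=(D, K))
X = rng.normal(size=(N, K)) @ W_true.T + 0.1 * rng.normal(size=(N, D))

W = rng.normal(scale=0.1, size=(D, K))          # loadings to be learned
for step in range(200):
    batch = X[rng.choice(N, size=64, replace=False)]
    # E-step: posterior mean of latent codes given current W (unit noise assumed)
    M = np.linalg.inv(W.T @ W + np.eye(K))
    Z = batch @ W @ M
    # M-step: stochastic update of the loadings toward the batch solution
    W_batch = batch.T @ Z @ np.linalg.inv(Z.T @ Z + 1e-6 * np.eye(K))
    W = 0.9 * W + 0.1 * W_batch
```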

Latent neural source recovery via transcoding of simultaneous EEG-fMRI

Oct 05, 2020
Xueqing Liu, Linbi Hong, Paul Sajda

Simultaneous EEG-fMRI is a multi-modal neuroimaging technique that provides complementary spatial and temporal resolution for inferring a latent source space of neural activity. In this paper we address this inference problem within the framework of transcoding -- mapping from a specific encoding (modality) to a decoding (the latent source space) and then encoding the latent source space to the other modality. Specifically, we develop a symmetric method consisting of a cyclic convolutional transcoder that transcodes EEG to fMRI and vice versa. Without any prior knowledge of either the hemodynamic response function or lead field matrix, the method exploits the temporal and spatial relationships between the modalities and latent source spaces to learn these mappings. We show, for real EEG-fMRI data, how well the modalities can be transcoded from one to another as well as the source spaces that are recovered, all on unseen data. In addition to enabling a new way to symmetrically infer a latent source space, the method can also be seen as low-cost computational neuroimaging -- i.e. generating an 'expensive' fMRI BOLD image from 'low cost' EEG data.
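
The round-trip training signal can be caricatured as below: transcoding EEG to fMRI and back should reproduce the original EEG, and vice versa. The modules are single placeholder layers and the shapes are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

eeg2fmri = nn.Conv1d(64, 100, kernel_size=5, padding=2)   # placeholder transcoders
fmri2eeg = nn.Conv1d(100, 64, kernel_size=5, padding=2)

eeg = torch.randn(8, 64, 256)                # batch, channels, time (toy shapes)
fmri_hat = eeg2fmri(eeg)                     # EEG -> predicted fMRI
cycle_loss = nn.functional.mse_loss(fmri2eeg(fmri_hat), eeg)  # round-trip penalty
```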

Unsupervised Sparse-view Backprojection via Convolutional and Spatial Transformer Networks

Jun 01, 2020
Xueqing Liu, Paul Sajda

Many imaging technologies rely on tomographic reconstruction, which requires solving a multidimensional inverse problem given a finite number of projections. Backprojection is a popular class of algorithms for tomographic reconstruction; however, it typically results in poor image reconstructions when the projection angles are sparse and/or the sensor characteristics are not uniform. Several deep learning-based algorithms have been developed to solve this inverse problem and reconstruct the image using a limited number of projections. However, these algorithms typically require examples of the ground truth (i.e. examples of reconstructed images) to yield good performance. In this paper, we introduce an unsupervised sparse-view backprojection algorithm that does not require ground truth. The algorithm consists of two modules in a generator-projector framework: a convolutional neural network and a spatial transformer network. We evaluated our algorithm using computed tomography (CT) images of the human chest. We show that our algorithm significantly outperforms filtered backprojection when the projection angles are very sparse, as well as when the sensor characteristics vary for different angles. Our approach has practical applications for medical imaging and other imaging modalities (e.g. radar) where sparse and/or non-uniform projections may be acquired due to time or sampling constraints.
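
A highly simplified sketch of the generator-projector loop follows: a CNN proposes an image, a fixed linear operator stands in for the projection geometry, and the unsupervised loss compares re-projections against the measured ones. The real method pairs the CNN with a spatial transformer network to handle unknown sensor characteristics; everything below is an illustrative assumption.

```python
import torch
import torch.nn as nn

img_size, n_views = 64, 8
generator = nn.Sequential(                       # CNN: crude input -> refined image
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1))

# Stand-in for the forward projection geometry (fixed, not trained).
project = nn.Linear(img_size * img_size, n_views * img_size, bias=False)
for p in project.parameters():
    p.requires_grad_(False)

measured = torch.randn(1, n_views * img_size)       # sparse-view measurements (toy)
backproj = torch.randn(1, 1, img_size, img_size)    # crude backprojection as input

image = generator(backproj)
loss = nn.functional.mse_loss(project(image.flatten(1)), measured)
```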

Accelerated Robot Learning via Human Brain Signals

Oct 01, 2019
Iretiayo Akinola, Zizhao Wang, Junyao Shi, Xiaomin He, Pawan Lapborisuth, Jingxi Xu, David Watkins-Valls, Paul Sajda, Peter Allen

In reinforcement learning (RL), sparse rewards are a natural way to specify the task to be learned. However, most RL algorithms struggle to learn in this setting since the learning signal is mostly zeros. In contrast, humans are good at assessing and predicting the future consequences of actions and can serve as good reward/policy shapers to accelerate the robot learning process. Previous work has shown that the human brain generates an error-related signal, measurable using electroencephalography (EEG), when the human perceives a task being done erroneously. In this work, we propose a method that uses evaluative feedback obtained from human brain signals measured via scalp EEG to accelerate RL for robotic agents in sparse reward settings. As the robot learns the task, the EEG of a human observer watching the robot's attempts is recorded and decoded into a noisy error-feedback signal. From this feedback, we use supervised learning to obtain a policy that subsequently augments the behavior policy and guides exploration in the early stages of RL. This bootstraps the RL learning process to enable learning from sparse rewards. Using a robotic navigation task as a test bed, we show that our method achieves a stable obstacle-avoidance policy with a high success rate, outperforming learning from sparse rewards alone, which struggles to achieve obstacle-avoidance behavior or fails to advance to the goal.
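
The bootstrapping step can be sketched as follows: noisy binary error labels, standing in for decoded EEG feedback, train a supervised guide policy that biases early exploration before the RL policy takes over. The label-noise rate, interfaces, and warm-up schedule are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
states = rng.normal(size=(500, 8))                  # observed robot states (toy)
correct = (states[:, 0] > 0).astype(int)            # "correct action" ground truth
noisy_labels = np.where(rng.random(500) < 0.3,      # ~30% EEG decoding errors
                        1 - correct, correct)

guide = LogisticRegression().fit(states, noisy_labels)   # supervised guide policy

def behavior_action(state, rl_action, epoch, warmup=50):
    """Early in training, follow the EEG-derived guide; later, trust RL."""
    if epoch < warmup:
        return guide.predict(state.reshape(1, -1))[0]
    return rl_action
```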
