Chandrajit Bajaj

GenCorres: Consistent Shape Matching via Coupled Implicit-Explicit Shape Generative Models

Apr 20, 2023
Haitao Yang, Xiangru Huang, Bo Sun, Chandrajit Bajaj, Qixing Huang

This paper introduces GenCorres, a novel unsupervised joint shape matching (JSM) approach. The basic idea of GenCorres is to learn a parametric mesh generator to fit an unorganized deformable shape collection while constraining deformations between adjacent synthetic shapes to preserve geometric structures such as local rigidity and local conformality. GenCorres presents three appealing advantages over existing JSM techniques. First, GenCorres performs JSM among a synthetic shape collection whose size is much larger than that of the input collection, fully leveraging the data-driven power of JSM. Second, GenCorres unifies consistent shape matching and pairwise matching (i.e., by enforcing deformation priors between adjacent synthetic shapes). Third, the generator provides a concise encoding of consistent shape correspondences. However, learning a mesh generator from an unorganized shape collection is challenging: it requires a good initial fitting to each shape and can easily get trapped in local minima. GenCorres addresses this issue by learning an implicit generator from the input shapes, which provides intermediate shapes between any two shapes. We introduce a novel approach for computing correspondences between adjacent implicit surfaces and enforce that these correspondences preserve geometric structures and are cycle-consistent. Synthetic shapes of the implicit generator then guide the initial fittings (i.e., via template-based deformation) for learning the mesh generator. Experimental results show that GenCorres considerably outperforms state-of-the-art JSM techniques on benchmark datasets. The synthetic shapes of GenCorres preserve local geometric features and yield competitive performance against state-of-the-art deformable shape generators.
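
To make the deformation-prior idea concrete, here is a minimal, hypothetical PyTorch sketch: decode meshes at two nearby latent codes and penalize edge-length distortion between them, a simple proxy for the local rigidity constraint. The `generator` callable, the `edges` tensor, and the edge-length proxy are illustrative assumptions, not the paper's actual formulation.

```python
import torch

def edge_lengths(vertices, edges):
    # vertices: (V, 3) float tensor; edges: (E, 2) long tensor of index pairs
    return (vertices[edges[:, 0]] - vertices[edges[:, 1]]).norm(dim=1)

def rigidity_prior(generator, z, edges, eps=1e-2):
    # Decode a mesh at latent code z and at a nearby ("adjacent") code, then
    # penalize edge-length distortion between the two meshes -- a crude
    # stand-in for the local rigidity/conformality priors described above.
    v0 = generator(z)                              # (V, 3) mesh at z
    v1 = generator(z + eps * torch.randn_like(z))  # adjacent synthetic shape
    return ((edge_lengths(v0, edges) - edge_lengths(v1, edges)) ** 2).mean()
```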

DeblurSR: Event-Based Motion Deblurring Under the Spiking Representation

Mar 15, 2023
Chen Song, Chandrajit Bajaj, Qixing Huang

We present DeblurSR, a novel motion deblurring approach that converts a blurry image into a sharp video. DeblurSR utilizes event data to compensate for motion ambiguities and exploits the spiking representation to parameterize the sharp output video as a mapping from time to intensity. Our key contribution, the Spiking Representation (SR), is inspired by the neuromorphic principles determining how biological neurons communicate with each other in living organisms. We discuss why the spikes can represent sharp edges and how the spiking parameters are interpreted from the neuromorphic perspective. DeblurSR has higher output quality and requires fewer computing resources than state-of-the-art event-based motion deblurring methods. We additionally show that our approach easily extends to video super-resolution when combined with recent advances in implicit neural representation. The implementation and animated visualization of DeblurSR are available at https://github.com/chensong1995/DeblurSR.
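
As a rough illustration of what "a mapping from time to intensity" can look like, the sketch below models a single pixel's intensity as a sum of steep sigmoids ("spikes"), each with its own onset time, sharpness, and amplitude. The exact parameterization used by DeblurSR may differ; all names and values here are illustrative.

```python
import numpy as np

def spiking_intensity(t, onsets, sharpness, amplitudes, bias=0.0):
    """Evaluate I(t) = bias + sum_k a_k * sigmoid(s_k * (t - t_k))."""
    t = np.asarray(t)[..., None]  # broadcast query times over spikes
    spikes = amplitudes / (1.0 + np.exp(-sharpness * (t - onsets)))
    return bias + spikes.sum(axis=-1)

# A single steep sigmoid models the sharp edge of a moving object
# crossing the pixel mid-exposure.
ts = np.linspace(0.0, 1.0, 5)
print(spiking_intensity(ts, onsets=np.array([0.5]),
                        sharpness=np.array([80.0]),
                        amplitudes=np.array([0.7]), bias=0.1))
```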

* 10 pages, 6 figures 

Solving the Side-Chain Packing Arrangement of Proteins from Reinforcement Learned Stochastic Decision Making

Dec 06, 2022
Chandrajit Bajaj, Conrad Li, Minh Nguyen

Protein structure prediction is a fundamental problem in computational molecular biology. Classical algorithms such as ab initio modeling or threading, as well as many learning-based methods, have been proposed to solve this challenging problem. However, most reinforcement learning methods model state-action pairs as discrete objects. In this paper, we develop a reinforcement learning (RL) framework in a continuous setting, based on a stochastic parametrized Hamiltonian version of the Pontryagin maximum principle (PMP), to solve the side-chain packing and protein-folding problem. In special cases, our formulation reduces to previous work in which optimal folding trajectories are trained via explicit Langevin dynamics. Optimal continuous stochastic Hamiltonian folding pathways can be derived under different models of molecular energetics and force fields. Our RL implementation adopts a soft actor-critic methodology, but this can be replaced with other RL training schemes such as A2C, A3C, or PPO.
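
For intuition, a hedged sketch of the continuous dynamics involved: an Euler-Maruyama rollout of a stochastic Hamiltonian system, where `hamiltonian` is an assumed callable returning a scalar H(q, p). The noise model, step sizes, and the toy example are illustrative, not the paper's trained setup.

```python
import torch

def rollout(hamiltonian, q, p, steps=100, dt=1e-2, sigma=0.05):
    """Integrate dq/dt = dH/dp, dp/dt = -dH/dq + noise (Euler-Maruyama)."""
    traj = [(q, p)]
    for _ in range(steps):
        q = q.detach().requires_grad_(True)
        p = p.detach().requires_grad_(True)
        H = hamiltonian(q, p)                      # must return a scalar
        dHdq, dHdp = torch.autograd.grad(H, (q, p))
        q = q + dt * dHdp
        p = p - dt * dHdq + sigma * dt ** 0.5 * torch.randn_like(p)
        traj.append((q.detach(), p.detach()))
    return traj

# Example: a harmonic-oscillator Hamiltonian, purely for demonstration.
H = lambda q, p: 0.5 * (p ** 2).sum() + 0.5 * (q ** 2).sum()
path = rollout(H, torch.ones(3), torch.zeros(3))
```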

* 14 pages, 5 figures 

A Particle-based Sparse Gaussian Process Optimizer

Nov 26, 2022
Chandrajit Bajaj, Omatharv Bharat Vaidya, Yi Wang

Task learning in neural networks typically requires finding a globally optimal minimizer to a loss function objective. Conventional designs of swarm based optimization methods apply a fixed update rule, with possibly an adaptive step-size for gradient descent based optimization. While these methods gain huge success in solving different optimization problems, there are some cases where these schemes are either inefficient or suffering from local-minimum. We present a new particle-swarm-based framework utilizing Gaussian Process Regression to learn the underlying dynamical process of descent. The biggest advantage of this approach is greater exploration around the current state before deciding a descent direction. Empirical results show our approach can escape from the local minima compare with the widely-used state-of-the-art optimizers when solving non-convex optimization problems. We also test our approach under high-dimensional parameter space case, namely, image classification task.
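
A minimal, assumption-laden sketch of the core loop: sample a small particle cloud around the current state, fit a Gaussian Process to the observed losses, and step to the candidate the GP posterior mean deems lowest. Function and parameter names are invented for illustration; this is a simplification, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def gp_descent_step(loss_fn, x, n_particles=16, radius=0.1, rng=None):
    rng = rng or np.random.default_rng()
    # Explore: evaluate the loss on a particle cloud around x.
    candidates = x + radius * rng.standard_normal((n_particles, x.size))
    values = np.array([loss_fn(c) for c in candidates])
    gp = GaussianProcessRegressor().fit(candidates, values)
    # Exploit: score a denser probe cloud under the posterior mean.
    probes = x + radius * rng.standard_normal((256, x.size))
    return probes[np.argmin(gp.predict(probes))]

# Example: one step on a non-convex 2-D objective.
f = lambda v: np.sin(3 * v[0]) + v[0] ** 2 + v[1] ** 2
x_next = gp_descent_step(f, np.array([1.0, -1.0]))
```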

A distribution-dependent Mumford-Shah model for unsupervised hyperspectral image segmentation

Mar 28, 2022
Jan-Christopher Cohrs, Chandrajit Bajaj, Benjamin Berkels

Hyperspectral images provide a rich representation of the underlying spectrum for each pixel, allowing for a pixel-wise classification/segmentation into different classes. As the acquisition of labeled training data is very time-consuming, unsupervised methods become crucial in hyperspectral image analysis. The spectral variability and noise in hyperspectral data make this task very challenging and impose special requirements on such methods. Here, we present a novel unsupervised hyperspectral segmentation framework. It starts with a denoising and dimensionality reduction step using the well-established Minimum Noise Fraction (MNF) transform. Then, the Mumford-Shah (MS) segmentation functional is applied to segment the data. We equip the MS functional with a novel robust distribution-dependent indicator function designed to handle the characteristic challenges of hyperspectral data. To optimize our objective function with respect to the parameters for which no closed-form solution is available, we propose an efficient fixed-point iteration scheme. Numerical experiments on four public benchmark datasets show that our method produces competitive results and outperforms two state-of-the-art methods substantially on three of these datasets.
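
To show the alternating structure such a fixed-point scheme follows, here is a toy piecewise-constant version of the Mumford-Shah data term: assign each pixel spectrum to the segment whose indicator it minimizes, then refit segment parameters. The paper's robust, distribution-dependent indicator replaces the plain squared distance used here; all names are illustrative.

```python
import numpy as np

def ms_data_term(spectra, k=4, iters=20, rng=None):
    """spectra: (N, B) pixel spectra; returns per-pixel labels and segment means."""
    rng = rng or np.random.default_rng(0)
    means = spectra[rng.choice(len(spectra), k, replace=False)]
    for _ in range(iters):
        # Indicator step: squared spectral distance to each segment parameter.
        d = ((spectra[:, None, :] - means[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Fixed-point parameter update per segment.
        for j in range(k):
            if (labels == j).any():
                means[j] = spectra[labels == j].mean(0)
    return labels, means

labels, means = ms_data_term(np.random.rand(200, 30))
```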

E-CIR: Event-Enhanced Continuous Intensity Recovery

Mar 03, 2022
Chen Song, Qixing Huang, Chandrajit Bajaj

A camera begins to sense light the moment we press the shutter button. During the exposure interval, relative motion between the scene and the camera causes motion blur, a common undesirable visual artifact. This paper presents E-CIR, which converts a blurry image into a sharp video represented as a parametric function from time to intensity. E-CIR leverages events as an auxiliary input. We discuss how to exploit the temporal event structure to construct the parametric bases. We demonstrate how to train a deep learning model to predict the function coefficients. To improve the appearance consistency, we further introduce a refinement module to propagate visual features among consecutive frames. Compared to state-of-the-art event-enhanced deblurring approaches, E-CIR generates smoother and more realistic results. The implementation of E-CIR is available at https://github.com/chensong1995/E-CIR.
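
As a toy stand-in for the parametric intensity function, the snippet below evaluates a per-pixel polynomial I(t) = sum_i c_i t^i. E-CIR constructs its bases from the event timestamps rather than plain monomials, so this is illustrative only.

```python
import numpy as np

def intensity(t, coeffs):
    """coeffs: (H, W, D) per-pixel coefficients; returns the (H, W) frame at time t."""
    powers = t ** np.arange(coeffs.shape[-1])  # (D,) basis evaluated at t
    return coeffs @ powers                     # contract over the basis axis

# Render the "sharp video": one frame per query time in the exposure interval.
coeffs = np.random.rand(4, 4, 3)
frames = [intensity(t, coeffs) for t in np.linspace(0.0, 1.0, 8)]
```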

* Accepted by CVPR 2022 

Learning Optimal Control with Stochastic Models of Hamiltonian Dynamics

Nov 15, 2021
Chandrajit Bajaj, Minh Nguyen

Optimal control problems can be solved by first applying the Pontryagin maximum principle, followed by computing a solution of the corresponding unconstrained Hamiltonian dynamical system. In this paper, to achieve a balance between robustness and efficiency, we learn a reduced Hamiltonian of the unconstrained Hamiltonian. This reduced Hamiltonian is learned by going backward in time and minimizing the loss function resulting from application of the Pontryagin maximum principle conditions. The robustness of our learning process is further improved by progressively learning a posterior distribution of reduced Hamiltonians, which leads to a more efficient sampling of the generalized coordinates (position, velocity) of our phase space. Our solution framework applies not only to optimal control problems with finite-dimensional phase (state) spaces but also to the infinite-dimensional case.
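
A hedged sketch of the training signal: parameterize a reduced Hamiltonian with a small network and penalize violations of the Hamiltonian/PMP conditions dq/dt = dH/dp and dp/dt = -dH/dq along observed trajectories. The network size, names, and loss weighting are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

# Hypothetical reduced Hamiltonian H(q, p) on a 2-D phase space per coordinate.
H = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 1))

def pmp_residual(q, p, dq_dt, dp_dt):
    """Mean squared violation of dq/dt = dH/dp and dp/dt = -dH/dq."""
    q, p = q.requires_grad_(True), p.requires_grad_(True)
    h = H(torch.cat([q, p], dim=-1)).sum()
    dHdq, dHdp = torch.autograd.grad(h, (q, p), create_graph=True)
    return ((dq_dt - dHdp) ** 2 + (dp_dt + dHdq) ** 2).mean()

# Toy batch of phase-space samples and observed time derivatives.
q, p = torch.randn(32, 2), torch.randn(32, 2)
loss = pmp_residual(q, p, dq_dt=torch.randn(32, 2), dp_dt=torch.randn(32, 2))
```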

* 11 pages, 11 figures 

Recursive Self-Improvement for Camera Image and Signal Processing Pipeline

Nov 15, 2021
Chandrajit Bajaj, Yi Wang, Yunhao Yang, Yuhan Zheng

Current camera image and signal processing pipelines (ISPs), including deep trained versions, tend to apply a single filter uniformly to the entire image, despite the fact that most acquired camera images have spatially heterogeneous artifacts. This spatial heterogeneity manifests across the image as varied Moire ringing, motion blur, color bleaching, or lens-based projection distortions. Moreover, combinations of these artifacts can be present in small or large pixel neighborhoods within an acquired image. Here, we present a deep reinforcement learning model that works in learned latent subspaces and recursively improves camera image quality through patch-based, spatially adaptive artifact filtering and image enhancement. Our RSE-RL model views the identification and correction of artifacts as a recursive self-learning and self-improvement exercise and consists of two major sub-modules: (i) a latent feature subspace clustering/grouping obtained through an equivariant variational autoencoder, enabling rapid identification of the correspondences and discrepancies between noisy and clean image patches; (ii) an adaptive learned transformation, controlled by a trust-region soft actor-critic agent, that progressively filters and enhances noisy patches using their closest clean-patch neighbors in feature space. Artificial artifacts that may be introduced by a patch-based ISP are also removed through a reward-based de-blocking recovery and image enhancement. We demonstrate the self-improvement feature of our model by recursively training and testing on images, wherein the enhanced images resulting from each epoch provide natural data augmentation and robustness to the RSE-RL training-filtering pipeline.
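
One ingredient, sketched under assumptions: embed noisy and clean patches with an encoder and pair each noisy patch with its nearest clean neighbor in latent space, the kind of correspondence the clustering sub-module is meant to expose. The `encoder` callable is hypothetical; this is not the paper's implementation.

```python
import numpy as np

def nearest_clean(noisy_patches, clean_patches, encoder):
    """Return, for each noisy patch, the index of its closest clean patch in latent space."""
    zn = np.stack([encoder(p) for p in noisy_patches])  # (N, d) noisy latents
    zc = np.stack([encoder(p) for p in clean_patches])  # (M, d) clean latents
    d2 = ((zn[:, None, :] - zc[None]) ** 2).sum(-1)     # (N, M) pairwise distances
    return d2.argmin(1)

# Toy data with a trivial flattening "encoder", for demonstration only.
noisy = np.random.rand(8, 16, 16, 3)
clean = np.random.rand(20, 16, 16, 3)
idx = nearest_clean(noisy, clean, encoder=lambda p: p.reshape(-1))
```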

* arXiv admin note: substantial text overlap with arXiv:2104.00253 

Robust Learning of Physics Informed Neural Networks

Oct 26, 2021
Chandrajit Bajaj, Luke McLennan, Timothy Andeen, Avik Roy

Physics-informed Neural Networks (PINNs) have been shown to be effective in solving partial differential equations by capturing the physics-induced constraints as a part of the training loss function. This paper shows that a PINN can be sensitive to errors in training data and can overfit to them, dynamically propagating these errors over the solution domain of the PDE. It also shows how physical regularizations based on continuity criteria and conservation laws fail to address this issue and instead introduce problems of their own, causing the deep network to converge to a physics-obeying local minimum instead of the global minimum. We introduce Gaussian Process (GP)-based smoothing that recovers the performance of a PINN and yields a robust architecture against noise/errors in measurements. Additionally, we illustrate an inexpensive method for quantifying the evolution of uncertainty based on the variance estimates of GPs on boundary data. Robust PINN performance can also be achieved by choosing sparse sets of inducing points based on sparse GPs. We demonstrate the performance of our proposed methods and compare against results from existing benchmark models in the literature for the time-dependent Schr\"odinger and Burgers' equations.
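
A minimal sketch of GP-based smoothing on noisy boundary data, using scikit-learn: fit a GP with an RBF-plus-noise kernel, then read off both a de-noised signal and a per-point uncertainty from the posterior. The kernel choice, noise level, and toy data are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy noisy boundary measurements of an unknown 1-D profile.
x = np.linspace(0.0, 1.0, 50)[:, None]
u_noisy = np.sin(2 * np.pi * x[:, 0]) + 0.1 * np.random.randn(50)

# Smooth with a GP; the WhiteKernel absorbs measurement noise.
gp = GaussianProcessRegressor(RBF(0.1) + WhiteKernel(0.01)).fit(x, u_noisy)
u_smooth, u_std = gp.predict(x, return_std=True)  # de-noised data + uncertainty
```

The de-noised `u_smooth` would then replace the raw measurements in the PINN's data loss, while `u_std` provides the variance-based uncertainty estimate the abstract describes.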

Scene Synthesis via Uncertainty-Driven Attribute Synchronization

Sep 01, 2021
Haitao Yang, Zaiwei Zhang, Siming Yan, Haibin Huang, Chongyang Ma, Yi Zheng, Chandrajit Bajaj, Qixing Huang

Developing deep neural networks to generate 3D scenes is a fundamental problem in neural synthesis with immediate applications in architectural CAD, computer graphics, and the generation of virtual robot training environments. This task is challenging because 3D scenes exhibit diverse patterns, ranging from continuous ones, such as object sizes and the relative poses between pairs of shapes, to discrete ones, such as the occurrence and co-occurrence of objects with symmetrical relationships. This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes. Our method combines the strengths of both neural network-based and conventional scene synthesis approaches. We use parametric prior distributions learned from training data, which provide uncertainties of object attributes and relative attributes, to regularize the outputs of feed-forward neural models. Moreover, instead of merely predicting a scene layout, our approach predicts an over-complete set of attributes. This methodology allows us to utilize the underlying consistency constraints among the predicted attributes to prune infeasible predictions. Experimental results show that our approach outperforms existing methods considerably. The generated 3D scenes interpolate the training data faithfully while preserving both continuous and discrete feature patterns.
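
To illustrate how an over-complete attribute set enables pruning, the sketch below compares predicted pairwise offsets against the offsets implied by predicted absolute positions and flags pairs that disagree. The attribute names and tolerance are assumptions, not the paper's actual consistency constraints.

```python
import numpy as np

def inconsistent_pairs(positions, rel_offsets, tol=0.2):
    """positions: (N, 3) predicted positions; rel_offsets[i, j]: predicted offset i -> j."""
    implied = positions[None, :, :] - positions[:, None, :]  # (N, N, 3) implied offsets
    err = np.linalg.norm(implied - rel_offsets, axis=-1)
    i, j = np.where(err > tol)
    return list(zip(i.tolist(), j.tolist()))                 # candidate pairs to prune

# Toy check: near-consistent predictions should yield few (or no) flagged pairs.
pos = np.random.rand(5, 3)
rels = pos[None] - pos[:, None] + 0.01 * np.random.randn(5, 5, 3)
print(inconsistent_pairs(pos, rels))
```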

* Published at ICCV 2021 