Avinandan Bose

Scalable Distributional Robustness in a Class of Non Convex Optimization with Guarantees

May 31, 2022
Avinandan Bose, Arunesh Sinha, Tien Mai

Distributionally robust optimization (DRO) has shown great promise in providing robustness in learning as well as in sample-based optimization problems. We provide DRO solutions for a class of sum-of-fractionals, non-convex optimization problems used for decision making in prominent areas such as facility location and security games. In contrast to previous work, we find it more tractable to optimize the equivalent variance-regularized form of DRO rather than the minimax form. We transform the variance-regularized form into a mixed-integer second-order cone program (MISOCP) which, while guaranteeing near-global optimality, does not scale well enough to solve problems with real-world datasets. We further propose two abstraction approaches, based on clustering and stratified sampling, to increase scalability, which we then apply to real-world datasets. Importantly, we provide near-global optimality guarantees for our approach and show experimentally that our solution quality is better than the locally optimal solutions achieved by state-of-the-art gradient-based methods. We experimentally compare our different approaches and baselines, and reveal nuanced properties of a DRO solution.
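
The variance-regularized reformulation the abstract refers to can be illustrated in a few lines. Below is a minimal, hypothetical sketch (not the paper's MISOCP construction) of the classical chi-squared-DRO/variance-regularization equivalence; `losses` stands in for the per-scenario objective values of a fixed candidate decision, and scaling conventions for the ambiguity radius `rho` vary across formulations.

```python
import numpy as np

def variance_regularized_objective(losses: np.ndarray, rho: float) -> float:
    """Variance-regularized surrogate for chi-squared DRO.

    For a chi-squared ambiguity set of radius `rho`, the worst-case
    expected loss is (approximately) the empirical mean plus a variance
    penalty, mean + sqrt(2 * rho * var); exact scaling of `rho` differs
    across formulations.  `losses` holds the per-scenario losses of a
    fixed candidate decision.
    """
    mean = losses.mean()
    var = losses.var(ddof=1)
    return mean + np.sqrt(2.0 * rho * var)

# Toy usage with random per-scenario losses (illustrative only):
rng = np.random.default_rng(0)
losses = rng.gamma(shape=2.0, scale=1.0, size=100)
print(variance_regularized_objective(losses, rho=0.1))
```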

* 24 pages, 3 figures, 5 tables 

Multiscale Generative Models: Improving Performance of a Generative Model Using Feedback from Other Dependent Generative Models

Jan 24, 2022
Changyu Chen, Avinandan Bose, Shih-Fen Cheng, Arunesh Sinha

Realistic fine-grained multi-agent simulation of real-world complex systems is crucial for many downstream tasks such as reinforcement learning. Recent work has used generative models (GANs in particular) to provide high-fidelity simulation of real-world systems. However, such generative models are often monolithic and miss out on modeling the interactions in multi-agent systems. In this work, we take a first step towards building multiple interacting generative models (GANs) that reflect the interactions in the real world. We build and analyze a hierarchical set-up in which a higher-level GAN is conditioned on the output of multiple lower-level GANs, and present a technique for using feedback from the higher-level GAN to improve the performance of the lower-level GANs. We mathematically characterize the conditions under which our technique is impactful, including the transfer-learning nature of our set-up. We present three distinct experiments, on synthetic data, time-series data, and the image domain, revealing the wide applicability of our technique.
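
As an illustration of the feedback mechanism described above, here is a minimal PyTorch sketch; the architectures, dimensions, and loss are placeholders rather than the paper's set-up. The point it shows is that the higher-level discriminator's score on the higher-level generator's output is backpropagated through the hierarchy, so the lower-level generators also receive gradients.

```python
import torch
import torch.nn as nn

latent_dim, out_dim = 16, 8  # illustrative sizes

class G(nn.Module):
    """A simple generator block standing in for a lower-level GAN's generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                 nn.Linear(32, out_dim))
    def forward(self, z):
        return self.net(z)

g_low1, g_low2 = G(), G()                          # lower-level generators
g_high = nn.Sequential(nn.Linear(2 * out_dim, 32), nn.ReLU(),
                       nn.Linear(32, out_dim))     # conditioned on lower outputs
d_high = nn.Sequential(nn.Linear(out_dim, 32), nn.ReLU(),
                       nn.Linear(32, 1))           # higher-level discriminator

z = torch.randn(64, latent_dim)
low_out = torch.cat([g_low1(z), g_low2(z)], dim=-1)
high_out = g_high(low_out)

# Feedback step: the higher-level discriminator's score flows back through
# g_high into g_low1 and g_low2, nudging the lower-level generators toward
# outputs the higher level can model well.
feedback_loss = -d_high(high_out).mean()
feedback_loss.backward()  # populates gradients of the lower-level generators too
```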

Conditional Expectation based Value Decomposition for Scalable On-Demand Ride Pooling

Dec 01, 2021
Avinandan Bose, Pradeep Varakantham

Owing to its benefits for customers (lower prices), drivers (higher revenues), aggregation companies (higher revenues) and the environment (fewer vehicles), on-demand ride pooling (e.g., Uber Pool, Grab Share) has become quite popular. The significant computational complexity of matching vehicles to combinations of requests means that traditional ride-pooling approaches are myopic: they do not consider the impact of current matches on the future value for vehicles/drivers. Recently, Neural Approximate Dynamic Programming (NeurADP) has employed value decomposition with Approximate Dynamic Programming (ADP) to outperform leading approaches by considering the impact of an individual agent's (vehicle's) chosen actions on the future value of that agent. However, to ensure scalability and facilitate city-scale ride pooling, NeurADP completely ignores the impact of other agents' actions on an individual agent's/vehicle's value. As demonstrated in our experimental results, ignoring the impact of other agents' actions on individual value can significantly degrade overall performance when there is increased competition among vehicles for demand. Our key contribution is a novel mechanism based on computing conditional expectations through joint conditional probabilities, which captures dependencies on other agents' actions without increasing the complexity of training or decision making. We show that our new approach, Conditional Expectation based Value Decomposition (CEVD), outperforms NeurADP by up to 9.76% in terms of overall requests served, a significant improvement on a city-wide benchmark taxi dataset.
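
The conditional-expectation idea can be sketched in tabular form. The snippet below is a hypothetical illustration (the paper works with neural value approximators and city-scale matching, not small tables): an agent's value for each of its own actions is averaged over a neighbouring agent's actions, weighted by an estimated joint conditional probability.

```python
import numpy as np

def conditional_expected_value(q_i: np.ndarray,
                               p_other_given_i: np.ndarray) -> np.ndarray:
    """Condition agent i's value on other agents' likely actions.

    q_i[a_i, a_j]             : value of agent i taking action a_i when a
                                neighbouring agent takes a_j (hypothetical
                                tabular form).
    p_other_given_i[a_i, a_j] : estimated conditional probability that the
                                other agent takes a_j given agent i takes a_i.
    Returns E[q_i | a_i] for each own action a_i, which replaces the
    unconditional individual value used by a NeurADP-style decomposition.
    """
    return (q_i * p_other_given_i).sum(axis=1)

q_i = np.array([[1.0, 0.2],
                [0.5, 0.9]])
p = np.array([[0.7, 0.3],
              [0.4, 0.6]])
print(conditional_expected_value(q_i, p))  # expected value per own action
```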

* Preprint. Under Review. arXiv admin note: text overlap with arXiv:1911.08842 

Changepoint Analysis of Topic Proportions in Temporal Text Data

Nov 29, 2021
Avinandan Bose, Soumendu Sundar Mukherjee

Changepoint analysis deals with the unsupervised detection and/or estimation of time points in time-series data at which the distribution generating the data changes. In this article, we consider offline changepoint detection in the context of large-scale textual data. We build a specialized temporal topic model with provisions for changepoints in the distribution of topic proportions. As full likelihood-based inference in this model is computationally intractable, we develop a computationally tractable approximate inference procedure. More specifically, we use sample splitting to first estimate the topic polytopes and then apply a likelihood ratio statistic together with a modified version of the wild binary segmentation algorithm of Fryzlewicz (2014). Our methodology enables automated detection of structural changes in large corpora without the need for manual processing by domain experts. As changepoints under our model correspond to changes in topic structure, the estimated changepoints are often highly interpretable, marking the surge or decline in popularity of a fashionable topic. We apply our procedure to two large datasets: (i) a corpus of English literature from the period 1800-1922 (Underwood et al., 2015); (ii) abstracts from the High Energy Physics arXiv repository (Clement et al., 2019). We obtain some historically well-known changepoints and discover some new ones.
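
To make the segmentation step concrete, here is a minimal sketch of wild binary segmentation on a one-dimensional signal. A Gaussian mean-shift CUSUM contrast stands in for the paper's likelihood ratio statistic on topic proportions; the interval count and threshold are illustrative choices, not values from the paper.

```python
import numpy as np

def cusum(x, s, e, b):
    """Standard CUSUM contrast for a mean change at b within (s, e]."""
    n, n1 = e - s, b - s
    n2 = n - n1
    return np.sqrt(n1 * n2 / n) * abs(x[s:b].mean() - x[b:e].mean())

def wbs(x, num_intervals=200, threshold=3.0, seed=0):
    """Wild binary segmentation (Fryzlewicz, 2014) on a 1-D signal.

    Draws random sub-intervals, maximizes the contrast over each, and
    recurses on both sides of any detected changepoint.
    """
    rng = np.random.default_rng(seed)
    found = []

    def recurse(s, e):
        if e - s < 2:
            return
        best_stat, best_b = 0.0, None
        for _ in range(num_intervals):
            l = rng.integers(s, e - 1)       # random interval [l, r) in [s, e)
            r = rng.integers(l + 2, e + 1)
            for b in range(l + 1, r):
                stat = cusum(x, l, r, b)
                if stat > best_stat:
                    best_stat, best_b = stat, b
        if best_b is not None and best_stat > threshold:
            found.append(best_b)
            recurse(s, best_b)
            recurse(best_b, e)

    recurse(0, len(x))
    return sorted(found)

# Toy signal with a single mean shift at index 100:
rng = np.random.default_rng(1)
x = np.concatenate([np.zeros(100), 2.0 * np.ones(100)]) + rng.normal(0, 0.5, 200)
print(wbs(x))  # should report a changepoint near index 100
```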

* 32 pages, 9 figures 

NeurInt: Learning to Interpolate through Neural ODEs

Nov 07, 2021
Avinandan Bose, Aniket Das, Yatin Dandi, Piyush Rai

A wide range of applications require learning image generation models whose latent space effectively captures the high-level factors of variation present in the data distribution. The extent to which a model represents such variations through its latent space can be judged by its ability to interpolate between images smoothly. However, most generative models mapping a fixed prior to the generated images lead to interpolation trajectories lacking smoothness and containing images of reduced quality. In this work, we propose a novel generative model that learns a flexible non-parametric prior over interpolation trajectories, conditioned on a pair of source and target images. Instead of relying on deterministic interpolation methods (such as linear or spherical interpolation in latent space), we devise a framework that learns a distribution of trajectories between two given images using Latent Second-Order Neural Ordinary Differential Equations. Through a combination of reconstruction and adversarial losses, the generator is trained to map the sampled points from these trajectories to sequences of realistic images that smoothly transition from the source to the target image. Through comprehensive qualitative and quantitative experiments, we demonstrate our approach's effectiveness in generating images of improved quality as well as its ability to learn a diverse distribution over smooth interpolation trajectories for any pair of real source and target images.
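
The trajectory model can be sketched as a second-order latent ODE written as a first-order system, here integrated with the torchdiffeq library. Everything below (dimensions, networks, how the initial velocity is sampled) is an illustrative assumption, not the paper's exact parameterization.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq

latent_dim = 32  # illustrative

class SecondOrderDynamics(nn.Module):
    """Second-order latent ODE as a first-order system.

    State = (z, v) with dz/dt = v and dv/dt = f(z, v).  A distribution
    over trajectories can be induced by sampling the initial velocity v0
    (e.g. from an encoder), which is roughly the mechanism the abstract
    describes.
    """
    def __init__(self):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(2 * latent_dim, 64), nn.Tanh(),
                               nn.Linear(64, latent_dim))

    def forward(self, t, state):
        z, v = state
        dv = self.f(torch.cat([z, v], dim=-1))
        return v, dv

dynamics = SecondOrderDynamics()
z_source = torch.randn(1, latent_dim)   # latent code of the source image
v0 = torch.randn(1, latent_dim)         # sampled initial velocity
t = torch.linspace(0.0, 1.0, steps=8)   # interpolation times
z_traj, _ = odeint(dynamics, (z_source, v0), t)
print(z_traj.shape)  # (8, 1, latent_dim): latents along one sampled trajectory
```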

* Accepted (Spotlight paper) at the NeurIPS 2021 Workshop on the Symbiosis of Deep Learning and Differential Equations (DLDE) 