Daniel Jiwoong Im

Active and Passive Causal Inference Learning

Aug 18, 2023
Daniel Jiwoong Im, Kyunghyun Cho


This paper serves as a starting point for machine learning researchers, engineers, and students who are interested in but not yet familiar with causal inference. We start by laying out an important set of assumptions that are collectively needed for causal identification, such as exchangeability, positivity, consistency, and the absence of interference. From these assumptions, we build out a set of important causal inference techniques, which we categorize into two buckets: active and passive approaches. We describe and discuss randomized controlled trials and bandit-based approaches from the active category. We then describe classical approaches, such as matching and inverse probability weighting, in the passive category, followed by more recent deep learning-based algorithms. We close by pointing out aspects of causal inference not covered here, such as collider bias, and expect the paper to provide readers with a diverse set of starting points for further reading and research in causal inference and discovery.
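As a concrete illustration of one of the passive techniques mentioned above, here is a minimal sketch of an inverse probability weighting (IPW) estimator of the average treatment effect. The logistic propensity model, the clipping constant, and the toy data are illustrative assumptions, not anything prescribed by the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, t, y):
    """Estimate E[Y(1)] - E[Y(0)] by reweighting observed outcomes with
    estimated propensity scores e(x) = P(T = 1 | X = x)."""
    e = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    e = np.clip(e, 1e-3, 1 - 1e-3)  # crude safeguard for the positivity assumption
    return np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))

# Toy usage on synthetic data with a true treatment effect of 2.0.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = 2.0 * t + X[:, 0] + rng.normal(size=2000)
print(ipw_ate(X, t, y))
```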


UAMM: UBET Automated Market Maker

Aug 11, 2023
Daniel Jiwoong Im, Alexander Kondratskiy, Vincent Harvey, Hsuan-Wei Fu

Automated market makers (AMMs) are pricing mechanisms used by decentralized exchanges (DEXs). Traditional AMMs are constrained to price solely from their own liquidity pool, without considering external markets or risk management for liquidity providers. In this paper, we propose a new approach known as UBET AMM (UAMM), which computes prices by taking into account external market prices and the impermanent loss of the liquidity pool. Despite relying on external market prices, our method maintains the desired properties of a constant product curve when computing slippage. The key element of UAMM is determining the appropriate slippage amount based on the desired target balance, which encourages the liquidity pool to minimize impermanent loss. We demonstrate that our approach eliminates arbitrage opportunities when external market prices are efficient.
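For readers unfamiliar with AMM mechanics, the toy constant-product swap below makes the notion of slippage concrete; the UAMM pricing rule itself (external market prices, target balances, impermanent-loss terms) is more involved and is not reproduced here.

```python
def constant_product_swap(reserve_in, reserve_out, amount_in, fee=0.003):
    """Return (amount_out, slippage) for a swap against an x * y = k pool."""
    amount_in_after_fee = amount_in * (1 - fee)
    k = reserve_in * reserve_out
    new_reserve_in = reserve_in + amount_in_after_fee
    amount_out = reserve_out - k / new_reserve_in
    spot_price = reserve_out / reserve_in   # marginal price before the trade
    exec_price = amount_out / amount_in     # realized average price of the trade
    slippage = 1 - exec_price / spot_price
    return amount_out, slippage

print(constant_product_swap(1_000_000, 1_000_000, 10_000))
```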


Causal Effect Variational Autoencoder with Uniform Treatment

Nov 16, 2021
Daniel Jiwoong Im, Kyunghyun Cho, Narges Razavian


Causal effect variational autoencoders (CEVAE) are trained to predict the outcome given observational treatment data, while uniform treatment variational autoencoders (UTVAE) are trained with a uniform treatment distribution via importance sampling. In this paper, we show that using the uniform rather than the observational treatment distribution leads to better causal inference by mitigating the distribution shift that occurs between training and test time. We also explore combining uniform and observational treatment distributions with the inference and generative network training objectives to find a better training procedure for inferring treatment effects. Experimentally, we find that the proposed UTVAE yields lower absolute average treatment effect error and lower precision-in-estimation-of-heterogeneous-effects (PEHE) error than the CEVAE on synthetic and IHDP datasets.
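To make the reweighting idea concrete, here is a minimal sketch of importance weighting from an observational treatment distribution to a uniform one over a binary treatment; the propensity estimates, the per-example loss, and the clamping are placeholder assumptions, not the UTVAE architecture itself.

```python
import torch

def uniform_treatment_loss(per_example_loss, t, propensity):
    """per_example_loss: (N,) losses computed on observational data.
    t: (N,) binary treatments.  propensity: (N,) estimates of p(T = 1 | x)."""
    p_obs = torch.where(t == 1, propensity, 1 - propensity)
    w = 0.5 / p_obs.clamp(min=1e-3)  # importance weight: uniform p(t) = 0.5 over p_obs(t | x)
    return (w * per_example_loss).mean()
```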


Online hyperparameter optimization by real-time recurrent learning

Feb 15, 2021
Daniel Jiwoong Im, Cristina Savin, Kyunghyun Cho


Conventional hyperparameter optimization methods are computationally intensive and hard to generalize to scenarios that require dynamically adapting hyperparameters, such as life-long learning. Here, we propose an online hyperparameter optimization algorithm that is asymptotically exact and computationally tractable, both theoretically and practically. Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs). It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously, without repeatedly rolling out iterative optimization. This procedure yields systematically better generalization performance than standard methods, at a fraction of the wallclock time.
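The following toy sketch illustrates the forward-mode, RTRL-style idea behind such online tuning: a sensitivity term d(theta)/d(lr) is carried along with each parameter update and used to adjust a hyperparameter on the fly. The one-dimensional quadratic loss, the single learning-rate hyperparameter, and the step sizes are assumptions for illustration only, not the algorithm from the paper.

```python
def online_lr_tuning(theta=5.0, lr=0.01, meta_lr=1e-3, steps=200):
    """Tune a learning rate online for gradient descent on loss(theta) = theta**2."""
    Z = 0.0  # sensitivity d(theta)/d(lr), carried forward through time
    for _ in range(steps):
        grad = 2 * theta                 # d loss / d theta
        Z = Z * (1 - 2 * lr) - grad      # sensitivity of the updated theta to lr
        theta = theta - lr * grad        # ordinary parameter update
        hyper_grad = 2 * theta * Z       # chain rule: d loss(theta_new) / d lr
        lr = max(lr - meta_lr * hyper_grad, 1e-4)  # online hyperparameter step
    return theta, lr

print(online_lr_tuning())
```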


Evaluation metrics for behaviour modeling

Jul 23, 2020
Daniel Jiwoong Im, Iljung Kwak, Kristin Branson


A primary difficulty with unsupervised discovery of structure in large data sets is a lack of quantitative evaluation criteria. In this work, we propose and investigate several metrics for evaluating and comparing generative models of behavior learned using imitation learning. Compared to the commonly used model log-likelihood, these criteria look at longer temporal relationships in behavior, remain relevant when behavior has properties that are inherently unpredictable, and highlight biases in the overall distribution of behaviors produced by the model. Pointwise metrics compare real to model-predicted trajectories given the true past. Distribution metrics compare statistics of the model simulating behavior in open loop, and are inspired by how experimental biologists evaluate the effects of manipulations on animal behavior. We show that the proposed metrics correspond with biologists' intuitions about behavior and allow us to evaluate models, understand their biases, and propose new research directions.
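As an informal sketch of the two metric families, the snippet below computes a pointwise one-step error given the true past and a distribution-level comparison of a summary statistic between real and open-loop-simulated trajectories. The trajectory format and the chosen statistic (per-step displacement) are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def pointwise_error(true_traj, predict_next):
    """Mean one-step prediction error, where predict_next(past) sees the true past."""
    preds = [predict_next(true_traj[:t]) for t in range(1, len(true_traj))]
    return float(np.mean(np.abs(np.array(preds) - true_traj[1:])))

def distribution_metric(real_trajs, simulated_trajs, statistic=np.mean):
    """Compare a behavioral statistic (here, mean per-step displacement) between
    real trajectories and trajectories simulated by the model in open loop."""
    real_stat = [statistic(np.abs(np.diff(tr))) for tr in real_trajs]
    sim_stat = [statistic(np.abs(np.diff(tr))) for tr in simulated_trajs]
    return abs(float(np.mean(real_stat)) - float(np.mean(sim_stat)))
```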

* 17 pages 

Are skip connections necessary for biologically plausible learning rules?

Dec 04, 2019
Daniel Jiwoong Im, Rutuja Patil, Kristin Branson


Backpropagation is the workhorse of deep learning; however, several other biologically motivated learning rules have been introduced, such as random feedback alignment and difference target propagation. None of these methods has matched the performance of backpropagation. In this paper, we show that biologically motivated learning rules with skip connections between intermediate layers can perform as well as backpropagation on the MNIST dataset and are robust to various choices of hyperparameters.
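Below is a rough numpy sketch of the kind of rule discussed above: errors are propagated backward through fixed random matrices rather than the transposed forward weights, and an additional random feedback path carries the output error directly (a skip connection) to the first hidden layer. The network sizes, the task, and the update details are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_h, n_out = 784, 256, 10
W1 = rng.normal(0, 0.01, (n_in, n_h))
W2 = rng.normal(0, 0.01, (n_h, n_h))
W3 = rng.normal(0, 0.01, (n_h, n_out))
B2 = rng.normal(0, 0.01, (n_out, n_h))  # fixed random feedback to hidden layer 2
B1 = rng.normal(0, 0.01, (n_out, n_h))  # fixed random "skip" feedback to hidden layer 1

def relu(x):
    return np.maximum(x, 0)

def train_step(x, y_onehot, lr=0.01):
    global W1, W2, W3
    h1 = relu(x @ W1)
    h2 = relu(h1 @ W2)
    y_hat = h2 @ W3
    e = y_hat - y_onehot          # output error
    d2 = (e @ B2) * (h2 > 0)      # error signal via fixed random feedback
    d1 = (e @ B1) * (h1 > 0)      # error skips layer 2 entirely
    W3 -= lr * np.outer(h2, e)
    W2 -= lr * np.outer(h1, d2)
    W1 -= lr * np.outer(x, d1)
```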


Model-Agnostic Meta-Learning using Runge-Kutta Methods

Oct 17, 2019
Daniel Jiwoong Im, Yibo Jiang, Nakul Verma


Meta-learning has emerged as an important framework for learning new tasks from just a few examples. The success of any meta-learning model depends on (i) its fast adaptation to new tasks, as well as (ii) having a shared representation across similar tasks. Here we extend the model-agnostic meta-learning (MAML) framework introduced by Finn et al. (2017) to achieve improved performance by analyzing the temporal dynamics of the optimization procedure via Runge-Kutta methods. This approach gives us fine-grained control over the optimization and helps us achieve both the adaptation and representation goals across tasks. By leveraging this refined control, we demonstrate that there are multiple principled ways to update MAML and show that the classic MAML optimization is simply a special case of a second-order Runge-Kutta method that focuses mainly on fast adaptation. Experiments on benchmark classification, regression, and reinforcement learning tasks show that this refined control helps attain improved results.
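As a hedged illustration of the ODE view, the snippet below contrasts a single gradient step (an explicit Euler step, the form of the classic MAML inner update) with a midpoint second-order Runge-Kutta step on an assumed one-dimensional task loss; the loss and the step size are placeholders, not the paper's experimental setup.

```python
def task_grad(theta):
    return 2 * (theta - 3.0)  # gradient of an assumed task loss (theta - 3)^2

def euler_adapt(theta, alpha=0.1):
    """Single explicit Euler step, i.e. the usual inner-loop gradient update."""
    return theta - alpha * task_grad(theta)

def rk2_adapt(theta, alpha=0.1):
    """Midpoint (second-order Runge-Kutta) step on the same task loss."""
    mid = theta - 0.5 * alpha * task_grad(theta)  # half step to the midpoint
    return theta - alpha * task_grad(mid)         # full step using the midpoint gradient

print(euler_adapt(0.0), rk2_adapt(0.0))
```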


Importance Weighted Adversarial Variational Autoencoders for Spike Inference from Calcium Imaging Data

Jun 07, 2019
Daniel Jiwoong Im, Sridhama Prakhya, Jinyao Yan, Srinivas Turaga, Kristin Branson


The Importance Weighted Autoencoder (IWAE) objective has been shown to improve the training of generative models over the standard Variational Autoencoder (VAE) objective. Here, we derive importance weighted extensions to Adversarial Variational Bayes (AVB) and the Adversarial Autoencoder (AAE). These latent variable models use implicitly defined inference networks whose approximate posterior density $q_\phi(z|x)$ cannot be directly evaluated, an essential ingredient for importance weighting. We show improved training and inference in latent variable models with our adversarially trained importance weighting method, and derive new theoretical connections between adversarial generative model training criteria and marginal likelihood based methods. We apply these methods to the important problem of inferring spiking neural activity from calcium imaging data, a challenging posterior inference problem in neuroscience, and show that posterior samples from the adversarial methods outperform factorized posteriors used in VAEs.
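For reference, here is a minimal sketch of the K-sample importance-weighted bound for a tractable posterior; in the adversarial setting described above, the log-density ratio would be estimated by a discriminator rather than evaluated in closed form, which this toy snippet does not attempt.

```python
import math
import torch

def iwae_bound(log_p_xz, log_q_z):
    """log_p_xz, log_q_z: (K, batch) tensors holding log p(x, z_k) and log q(z_k | x).
    Returns the K-sample importance-weighted lower bound, averaged over the batch."""
    K = log_p_xz.shape[0]
    log_w = log_p_xz - log_q_z  # log importance weights
    return (torch.logsumexp(log_w, dim=0) - math.log(K)).mean()
```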


Stochastic Neighbor Embedding under f-divergences

Nov 03, 2018
Daniel Jiwoong Im, Nakul Verma, Kristin Branson


The $t$-distributed Stochastic Neighbor Embedding ($t$-SNE) is a powerful and popular method for visualizing high-dimensional data. It minimizes the Kullback-Leibler (KL) divergence between the original and embedded data distributions. In this work, we propose extending this method to other f-divergences. We analytically and empirically evaluate the types of latent structure (manifold, cluster, and hierarchical) that are well captured using both the original KL divergence and the proposed f-divergence generalization, and find that different divergences perform better for different types of structure. A common concern with the $t$-SNE criterion is that it is optimized using gradient descent and can become stuck in poor local minima. We propose optimizing the f-divergence-based loss criteria by minimizing a variational bound. This typically performs better than optimizing the primal form, and our experiments show that it can improve upon the embedding results obtained from the original $t$-SNE criterion as well.
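To make the generalization concrete, the sketch below evaluates a few f-divergences between precomputed pairwise affinity matrices P (data space) and Q (embedding space); the affinity construction and the variational-bound optimization described above are omitted, and this function is only an illustrative assumption of how the criterion could be swapped.

```python
import numpy as np

def f_divergence(P, Q, kind="kl", eps=1e-12):
    """Compute an f-divergence between two (normalized) affinity matrices."""
    P, Q = P + eps, Q + eps
    if kind == "kl":    # forward KL, the original t-SNE criterion
        return np.sum(P * np.log(P / Q))
    if kind == "rkl":   # reverse KL
        return np.sum(Q * np.log(Q / P))
    if kind == "js":    # Jensen-Shannon
        M = 0.5 * (P + Q)
        return 0.5 * np.sum(P * np.log(P / M)) + 0.5 * np.sum(Q * np.log(Q / M))
    if kind == "chi2":  # chi-squared
        return np.sum((P - Q) ** 2 / Q)
    raise ValueError(kind)
```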


Quantitatively Evaluating GANs With Divergences Proposed for Training

Apr 28, 2018
Daniel Jiwoong Im, He Ma, Graham Taylor, Kristin Branson


Generative adversarial networks (GANs) have been extremely effective in approximating complex distributions of high-dimensional input data, and substantial progress has been made in understanding and improving GAN performance in terms of both theory and application. However, we currently lack quantitative methods for model assessment. Because of this, while many GAN variants are being proposed, we have relatively little understanding of their relative abilities. In this paper, we evaluate the performance of various types of GANs using divergence and distance functions typically used only for training. We observe consistency across the various proposed metrics and, interestingly, find that the test-time metrics do not favour networks that use the same training-time criterion. We also compare the proposed metrics to human perceptual scores.
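The snippet below is a minimal sketch of the general recipe of measuring a divergence at test time with an independently trained critic: a logistic critic is fit to separate real from generated samples, and the implied Jensen-Shannon-style gap is reported. The logistic model, the Gaussian toy data, and this specific proxy are illustrative assumptions, not the exact divergences evaluated in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def critic_divergence(real, fake):
    """Fit a critic to distinguish real from generated samples and return a
    Jensen-Shannon-style gap: 0 for indistinguishable sets, up to log(2)."""
    X = np.vstack([real, fake])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(fake))])
    critic = LogisticRegression(max_iter=1000).fit(X, y)
    bce = log_loss(y, critic.predict_proba(X)[:, 1])
    return np.log(2) - bce

rng = np.random.default_rng(0)
print(critic_divergence(rng.normal(0, 1, (500, 2)), rng.normal(0.5, 1, (500, 2))))
```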

* ICLR 2018 