Minh-Ngoc Tran

DeepVol: A Deep Transfer Learning Approach for Universal Asset Volatility Modeling

Sep 05, 2023
Chen Liu, Minh-Ngoc Tran, Chao Wang, Richard Gerlach, Robert Kohn

This paper introduces DeepVol, a promising new deep learning volatility model that outperforms traditional econometric models in terms of generality. DeepVol leverages transfer learning to capture and model the volatility dynamics of all financial assets, including previously unseen ones, using a single universal model. This contrasts with the prevailing practice in the econometrics literature, which requires training a separate model for each dataset. DeepVol opens up new avenues for volatility modeling and forecasting in the finance industry, potentially transforming the way volatility is understood and predicted.
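
The paper's architecture is not reproduced here; the sketch below only illustrates the universal-model idea described in the abstract: a single recurrent volatility model trained on return series pooled across many assets, then applied without retraining to an asset it has never seen. The LSTM architecture, the Gaussian likelihood loss, and all names are illustrative assumptions.

```python
# Minimal sketch of a "universal volatility model": one recurrent network
# trained on pooled return series from many assets, applied zero-shot to an
# unseen asset. Architecture and loss are assumptions, not the paper's spec.
import torch
import torch.nn as nn

class UniversalVolModel(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, returns):
        # returns: (batch, time, 1); emit a log-variance for each step
        h, _ = self.rnn(returns)
        return self.head(h)

def gaussian_nll(log_var, returns):
    # One-step-ahead scoring: log-variance built from info up to t-1
    # evaluates the return at t, under a zero-mean Gaussian.
    lv, r = log_var[:, :-1], returns[:, 1:]
    return 0.5 * (lv + r**2 / lv.exp()).mean()

model = UniversalVolModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Pool training series across many assets into one batch (toy data here).
pooled = torch.randn(64, 200, 1) * 0.01
for _ in range(100):
    opt.zero_grad()
    loss = gaussian_nll(model(pooled), pooled)
    loss.backward()
    opt.step()

# The same fitted model forecasts volatility for a previously unseen asset.
unseen = torch.randn(1, 200, 1) * 0.02
with torch.no_grad():
    sigma = model(unseen).exp().sqrt()
```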

Wasserstein Gaussianization and Efficient Variational Bayes for Robust Bayesian Synthetic Likelihood

May 24, 2023
Nhat-Minh Nguyen, Minh-Ngoc Tran, Christopher Drovandi, David Nott

The Bayesian Synthetic Likelihood (BSL) method is a widely used tool for likelihood-free Bayesian inference. It assumes that some summary statistics of the data are normally distributed, which can be incorrect in many applications. We propose a transformation, called the Wasserstein Gaussianization transformation, that uses a Wasserstein gradient flow to approximately transform the distribution of the summary statistics into a Gaussian distribution. BSL also implicitly requires compatibility between the summary statistics simulated under the working model and the observed summary statistics; a robust BSL variant that achieves this has been developed in the recent literature. We combine the Wasserstein Gaussianization transformation with robust BSL, together with an efficient Variational Bayes procedure for posterior approximation, to develop a highly efficient and reliable approximate Bayesian inference method for likelihood-free problems.
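
As context for the abstract, the sketch below shows the standard synthetic-likelihood estimate that BSL builds on: summary statistics are assumed Gaussian, with mean and covariance fitted from model simulations. The paper's Wasserstein Gaussianization transformation would be applied to the summaries before this step and is not reproduced here; function names and the toy model are illustrative.

```python
# Standard synthetic log-likelihood estimate underlying BSL: simulate under
# the working model at theta, summarize, fit a Gaussian to the simulated
# summaries, and score the observed summary under it.
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_log_likelihood(theta, simulate, summarize, s_obs, n_sim=200, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    # n_sim simulated datasets, each reduced to a summary-statistic vector.
    S = np.array([summarize(simulate(theta, rng)) for _ in range(n_sim)])
    mu = S.mean(axis=0)
    Sigma = np.cov(S, rowvar=False)
    # Gaussian log-density of the observed summaries at the fitted moments.
    return multivariate_normal.logpdf(s_obs, mean=mu, cov=Sigma)

# Toy usage: normal model with unknown mean, summarized by (mean, log sd).
simulate = lambda theta, rng: rng.normal(theta, 1.0, size=100)
summarize = lambda x: np.array([x.mean(), np.log(x.std())])
s_obs = summarize(np.random.default_rng(0).normal(0.5, 1.0, size=100))
print(synthetic_log_likelihood(0.5, simulate, summarize, s_obs))
```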

Particle Mean Field Variational Bayes

Mar 24, 2023
Minh-Ngoc Tran, Paco Tseng, Robert Kohn

Figure 1 for Particle Mean Field Variational Bayes
Figure 2 for Particle Mean Field Variational Bayes
Figure 3 for Particle Mean Field Variational Bayes
Figure 4 for Particle Mean Field Variational Bayes

The Mean Field Variational Bayes (MFVB) method is one of the most computationally efficient techniques for Bayesian inference. However, its use has been restricted to models with conjugate priors or models for which the required expectations can be computed analytically. This paper proposes a novel particle-based MFVB approach that greatly expands the applicability of the method. We establish the theoretical basis of the new approach by leveraging the connection between Wasserstein gradient flows and Langevin diffusion dynamics, and demonstrate its effectiveness on Bayesian logistic regression, stochastic volatility, and deep neural networks.
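
A minimal sketch of the particle idea, under stated assumptions: each mean-field block is represented by a cloud of particles, and each cloud follows Langevin dynamics targeting the exponential of the expected log joint over the other block's particles. The toy bivariate Gaussian target, the step size, and the update order are illustrative; this is not the paper's exact algorithm.

```python
# Particle-based mean-field VB sketch: blocks q(x) and q(y) are particle
# clouds; each cloud takes Langevin steps (drift + sqrt(2*eps) noise)
# toward exp(E_{other block}[log p(x, y)]).
import numpy as np

rng = np.random.default_rng(1)

def grad_x(x, y):   # d/dx log p for a toy bivariate Gaussian target
    return -(x + 0.5 * y)

def grad_y(x, y):   # d/dy log p
    return -(y + 0.5 * x)

n, eps = 500, 0.05
x = rng.normal(size=n)          # particles representing q(x)
y = rng.normal(size=n)          # particles representing q(y)

for _ in range(1000):
    # Expected gradient over the other block's particles; because the
    # gradient is linear here, the expectation reduces to the mean.
    gx = grad_x(x, y.mean())
    x = x + eps * gx + np.sqrt(2 * eps) * rng.normal(size=n)
    gy = grad_y(x.mean(), y)
    y = y + eps * gy + np.sqrt(2 * eps) * rng.normal(size=n)

print(x.mean(), x.var(), y.mean(), y.var())  # each block near N(0, 1)
```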

Realized recurrent conditional heteroskedasticity model for volatility modelling

Feb 16, 2023
Chen Liu, Chao Wang, Minh-Ngoc Tran, Robert Kohn

We propose a new approach to volatility modelling that combines deep learning (LSTM) with realized volatility measures. This LSTM-enhanced realized GARCH framework incorporates and distills modeling advances from financial econometrics, high-frequency trading data, and deep learning. Bayesian inference via Sequential Monte Carlo is employed for statistical inference and forecasting. The new framework jointly models returns and realized volatility measures, achieves excellent in-sample fit and superior predictive performance compared with several benchmark models, and adapts well to the stylized facts of volatility. Its performance is tested using a wide range of metrics, from marginal likelihood and volatility forecasting to tail risk forecasting and option pricing. We report on a comprehensive empirical study using 31 widely traded stock indices over a period that includes the COVID-19 pandemic.

* 47 pages, 12 tables 
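
To make the combination concrete, here is a hedged sketch of what a recurrent term added to a log-linear realized-GARCH-style recursion could look like. The single-tanh recurrent unit, the parameterization, and all names are placeholders rather than the paper's specification.

```python
# Illustrative one-asset filter: GARCH-style linear terms in log-variance
# plus a nonlinear recurrent state driven by lagged inputs.
import numpy as np

def rec_garch_filter(returns, rv, params):
    omega, beta, gamma, w_h, w_x, w_r = params
    T = len(returns)
    log_h = np.empty(T)          # log conditional variance
    z = 0.0                      # recurrent hidden state
    log_h[0] = np.log(rv[0])
    for t in range(1, T):
        # Recurrent state fed by the lagged realized measure and return.
        z = np.tanh(w_h * z + w_x * np.log(rv[t - 1]) + w_r * returns[t - 1])
        # Realized-GARCH-style update plus the recurrent contribution.
        log_h[t] = omega + beta * log_h[t - 1] + gamma * np.log(rv[t - 1]) + z
    return np.exp(log_h)

# Toy usage with simulated inputs.
rng = np.random.default_rng(0)
r = rng.normal(0, 0.01, size=500)
rv = r**2 + 1e-6
h = rec_garch_filter(r, rv, params=(-0.1, 0.7, 0.2, 0.5, 0.1, 0.1))
```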

An Introduction to Quantum Computing for Statisticians

Dec 13, 2021
Anna Lopatnikova, Minh-Ngoc Tran

Quantum computing has the potential to revolutionise the way we live and understand the world. This review aims to provide an accessible introduction to quantum computing, with a focus on applications in statistics and data analysis. We start with the basic concepts needed to understand quantum computing and the differences between quantum and classical computing. We describe the core quantum subroutines that serve as building blocks of quantum algorithms. We then review a range of quantum algorithms expected to deliver a computational advantage in statistics and machine learning. Finally, we highlight the challenges and opportunities in applying quantum computing to problems in statistics and discuss potential future research directions.

* 80 pages 

Quantum Natural Gradient for Variational Bayes

Jul 08, 2021
Anna Lopatnikova, Minh-Ngoc Tran

Variational Bayes (VB) is a critical method in machine learning and statistics, underpinning the recent success of Bayesian deep learning. The natural gradient is an essential component of efficient VB estimation, but it is prohibitively expensive to compute in high dimensions. We propose a hybrid quantum-classical algorithm that improves the scaling of natural gradient computation, making VB a truly computationally efficient method for Bayesian inference in high-dimensional settings. The algorithm leverages matrix inversion from the linear systems algorithm of Harrow, Hassidim, and Lloyd [Phys. Rev. Lett. 103, 15 (2009)] (HHL). We demonstrate that the matrix to be inverted is sparse and that the classical-quantum-classical handoffs are sufficiently economical to preserve computational efficiency, making natural gradient computation for VB an ideal application of HHL. We prove that, under standard conditions, the VB algorithm with quantum natural gradient is guaranteed to converge. Our regression-based natural gradient formulation is also highly useful for classical VB.
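
For orientation, the classical natural gradient step that the paper accelerates preconditions the ordinary gradient by the inverse Fisher information: theta <- theta + rho * F^{-1} grad ELBO. The sketch below performs the linear solve with numpy; in the paper, that solve is the step handed off to the HHL quantum linear-systems algorithm. The toy Fisher matrix and gradient are stand-ins.

```python
# Classical natural gradient step for VB: solve F x = grad(ELBO) rather
# than forming F^{-1} explicitly; this solve is what HHL would replace.
import numpy as np

def natural_gradient_step(theta, grad_elbo, fisher, rho=0.1, jitter=1e-6):
    d = len(theta)
    # Small jitter keeps the solve well-posed if F is near-singular.
    x = np.linalg.solve(fisher + jitter * np.eye(d), grad_elbo)
    return theta + rho * x

# Toy usage with placeholder Fisher matrix and ELBO gradient.
d = 5
theta = np.zeros(d)
fisher = np.eye(d) * 2.0
grad_elbo = np.ones(d)
theta = natural_gradient_step(theta, grad_elbo, fisher)
```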

A practical tutorial on Variational Bayes

Mar 01, 2021
Minh-Ngoc Tran, Trong-Nghia Nguyen, Viet-Hung Dao

This tutorial gives a quick introduction to Variational Bayes (VB), also called Variational Inference or Variational Approximation, from a practical point of view. The paper covers a range of commonly used VB methods and keeps the material accessible to the wide community of data analysis practitioners. The aim is for readers to be able to quickly derive and implement their first VB algorithm for Bayesian inference on their own data analysis problems. An end-user software package in Matlab, together with documentation, can be found at https://vbayeslab.github.io/VBLabDocs/

* 43 pages, 9 figures, 3 tables 
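
In the spirit of the tutorial's "first VB algorithm" goal (the accompanying VBLab package is in Matlab; Python is used here for consistency with the other sketches), below is a self-contained sketch of fixed-form Gaussian VB with reparameterization-trick gradients on a toy 1-D posterior. The target and hyperparameters are illustrative.

```python
# Fixed-form Gaussian VB: fit q(theta) = N(mu, exp(log_s)^2) to a toy
# posterior by stochastic gradient ascent on the ELBO, with gradients
# obtained by differentiating through theta = mu + s * eps.
import numpy as np

rng = np.random.default_rng(0)

def log_target(theta):
    # Unnormalized log posterior: N(0,1) prior times N(2, 0.5^2) likelihood.
    return -0.5 * theta**2 - 0.5 * ((theta - 2.0) / 0.5) ** 2

mu, log_s, lr = 0.0, 0.0, 0.02
for _ in range(2000):
    eps = rng.normal(size=100)
    s = np.exp(log_s)
    theta = mu + s * eps                       # reparameterized samples
    dlogp = -theta - (theta - 2.0) / 0.25      # d/dtheta log_target
    g_mu = dlogp.mean()
    g_log_s = (dlogp * eps * s).mean() + 1.0   # +1 from the entropy term
    mu, log_s = mu + lr * g_mu, log_s + lr * g_log_s

print(mu, np.exp(log_s))  # ~1.6 and ~0.447, the exact posterior moments
```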

Adaptive Multi-level Hyper-gradient Descent

Aug 19, 2020
Renlong Jie, Junbin Gao, Andrey Vasnev, Minh-Ngoc Tran

Adaptive learning rates can lead to faster convergence and better final performance for deep learning models. There are several widely known human-designed adaptive optimizers such as Adam and RMSProp, gradient-based adaptive methods such as hyper-descent and L4, and meta-learning approaches such as learning-to-learn. However, balancing adaptiveness and over-parameterization remains an open issue. In this study, we investigate different levels of learning rate adaptation within the framework of hyper-gradient descent, and further propose a method that adaptively learns how to combine different levels of adaptation. We also show the relationship between regularizing over-parameterized learning rates and combining different levels of adaptive learning rates. Experiments on several network architectures, including feed-forward networks, LeNet-5, and ResNet-34, show that the proposed multi-level adaptive approach outperforms baseline adaptive methods in a variety of circumstances, with statistical significance.
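
For reference, the basic hyper-gradient descent rule underlying the framework (due to Baydin et al.) updates the learning rate itself by gradient descent on the loss, using the dot product of successive gradients. The quadratic loss below is a toy example; the paper's multi-level combination of such updates is not reproduced.

```python
# Hyper-gradient descent sketch: since theta_t = theta_{t-1} - alpha * g_{t-1},
# d loss / d alpha = -g_t . g_{t-1}, so alpha grows when successive gradients
# align and shrinks when they oppose.
import numpy as np

def f_grad(theta):               # gradient of a toy quadratic loss ||theta||^2
    return 2.0 * theta

theta = np.array([5.0, -3.0])
alpha, beta = 0.01, 1e-4         # initial learning rate, hyper learning rate
g_prev = f_grad(theta)

for _ in range(100):
    g = f_grad(theta)
    alpha = alpha + beta * float(g @ g_prev)   # hyper-gradient update
    theta = theta - alpha * g                  # ordinary parameter update
    g_prev = g

print(theta, alpha)
```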
