Truong Son Hy

Sparsity exploitation via discovering graphical models in multi-variate time-series forecasting

Jun 29, 2023
Ngoc-Dung Do, Truong Son Hy, Duy Khuong Nguyen

Figures 1–4 for Sparsity exploitation via discovering graphical models in multi-variate time-series forecasting

Graph neural networks (GNNs) have been widely applied in multi-variate time-series forecasting (MTSF) tasks because of their capability to capture the correlations among different time-series. These graph-based learning approaches improve the forecasting performance by discovering and understanding the underlying graph structures, which represent the data correlation. When explicit prior graph structures are not available, most existing works cannot guarantee the sparsity of the generated graphs, which makes the overall model computationally expensive and less interpretable. In this work, we propose a decoupled training method, which includes a graph generating module and a GNN forecasting module. First, we use Graphical Lasso (or GraphLASSO) to directly exploit the sparsity pattern of the data to build graph structures in both static and time-varying cases. Second, we feed these graph structures together with the input data into a Graph Convolutional Recurrent Network (GCRN) to train a forecasting model. The experimental results on three real-world datasets show that our novel approach has competitive performance against existing state-of-the-art forecasting algorithms while providing sparse, meaningful, and explainable graph structures and reducing training time by approximately 40%. Our PyTorch implementation is publicly available at https://github.com/HySonLab/GraphLASSO
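
As a rough illustration of the first stage, the sketch below uses scikit-learn's GraphicalLasso to estimate a sparse precision matrix from synthetic multi-variate series and thresholds its off-diagonal support into an adjacency matrix. The toy data, the penalty `alpha=0.3`, and the threshold are illustrative choices, not the paper's actual settings.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Toy data: three series driven by a shared factor plus two independent ones.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
X = np.hstack([base + 0.5 * rng.normal(size=(200, 1)) for _ in range(3)]
              + [rng.normal(size=(200, 2))])
X = (X - X.mean(0)) / X.std(0)

# L1-penalized sparse inverse covariance; `alpha` controls edge sparsity.
model = GraphicalLasso(alpha=0.3).fit(X)

# Nonzero off-diagonals of the precision matrix define the graph edges.
P = 0.5 * (model.precision_ + model.precision_.T)  # enforce exact symmetry
A = (np.abs(P) > 1e-4).astype(int)
np.fill_diagonal(A, 0)
```

The resulting binary adjacency could then be handed to any graph forecaster (a GCRN in the paper's pipeline); raising `alpha` prunes more edges.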


Neural Multigrid Memory For Computational Fluid Dynamics

Jun 24, 2023
Duc Minh Nguyen, Minh Chau Vu, Tuan Anh Nguyen, Tri Huynh, Nguyen Tri Nguyen, Truong Son Hy

Figures 1–4 for Neural Multigrid Memory For Computational Fluid Dynamics

Turbulent flow simulation plays a crucial role in various applications, including aircraft and ship design, industrial process optimization, and weather prediction. In this paper, we propose an advanced data-driven method for simulating turbulent flow, representing a significant improvement over existing approaches. Our methodology combines the strengths of Video Prediction Transformer (VPTR) (Ye & Bilodeau, 2022) and Multigrid Architecture (MgConv, MgResnet) (Ke et al., 2017). VPTR excels in capturing complex spatiotemporal dependencies and handling large input data, making it a promising choice for turbulent flow prediction. Meanwhile, Multigrid Architecture utilizes multiple grids with different resolutions to capture the multiscale nature of turbulent flows, resulting in more accurate and efficient simulations. Through our experiments, we demonstrate the effectiveness of our proposed approach, named MGxTransformer, in accurately predicting velocity, temperature, and turbulence intensity for incompressible turbulent flows across various geometries and flow conditions. Our results exhibit superior accuracy compared to other baselines, while maintaining computational efficiency. Our implementation in PyTorch is available publicly at https://github.com/Combi2k2/MG-Turbulent-Flow
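
The multigrid idea of representing a field at several resolutions can be sketched as a simple pooling pyramid; the 2x2 average-pooling `restrict` and nearest-neighbour `prolong` below are generic stand-ins for the transfer operators, not the paper's MgConv layers.

```python
import numpy as np

def restrict(x):
    """Coarsen a 2D field by 2x2 average pooling (one multigrid level down)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def prolong(x):
    """Go one level up by nearest-neighbour upsampling."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def multigrid_features(x, levels=3):
    """Represent the field at several resolutions, so that each level
    can attend to a different scale of the flow."""
    feats = [x]
    for _ in range(levels - 1):
        x = restrict(x)
        feats.append(x)
    return feats

field = np.arange(64, dtype=float).reshape(8, 8)
pyramid = multigrid_features(field)
```

Note that average pooling preserves the global mean of the field exactly, which is one reason restriction operators of this form are popular in multigrid pipelines.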

* arXiv admin note: text overlap with arXiv:1911.08655 by other authors 

Predicting COVID-19 pandemic by spatio-temporal graph neural networks: A New Zealand's study

May 12, 2023
Viet Bach Nguyen, Truong Son Hy, Long Tran-Thanh, Nhung Nghiem

Figures 1–4 for Predicting COVID-19 pandemic by spatio-temporal graph neural networks: A New Zealand's study

Modeling and simulations of pandemic dynamics play an essential role in understanding and addressing the spreading of highly infectious diseases such as COVID-19. In this work, we propose a novel deep learning architecture named Attention-based Multiresolution Graph Neural Networks (ATMGNN) that learns to combine the spatial graph information, i.e. geographical data, with the temporal information, i.e. time-series data of COVID-19 case counts, to predict the future dynamics of the pandemic. The key innovation is that our method can capture the multiscale structures of the spatial graph via a learning-to-cluster algorithm in a data-driven manner. This allows our architecture to learn to pick up either local or global signals of a pandemic, and to model both the long-range spatial and temporal dependencies. Importantly, we collected and assembled a new dataset for New Zealand. We established a comprehensive benchmark of statistical methods, temporal architectures, and graph neural networks alongside our spatio-temporal model. We also incorporated socioeconomic cross-sectional data to further enhance our predictions. Our proposed model produces highly robust predictions and outperforms all other baselines in various metrics on our new dataset of New Zealand as well as existing datasets of England, France, Italy, and Spain. In future work, we plan to extend our method to real-time prediction and a global scale. Our data and source code are publicly available at https://github.com/HySonLab/pandemic_tgnn
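
A minimal numpy sketch of the general spatio-temporal recipe (spatial mixing over a region graph at each step, followed by a recurrent temporal update) might look as follows; the toy graph, random weights, and the simple tanh recurrence are illustrative stand-ins for ATMGNN's attention and clustering components, which the paper builds on top of this pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 4 regions with adjacency A, 10 days of case counts per region.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                    # add self-loops
A_hat /= A_hat.sum(1, keepdims=True)     # row-normalize
cases = rng.poisson(50, size=(10, 4)).astype(float)

d = 8
W_in = rng.normal(scale=0.1, size=(1, d))
W_h = rng.normal(scale=0.1, size=(d, d))
w_out = rng.normal(scale=0.1, size=(d,))

# At each time step: spatial mixing over the graph, then a recurrent
# update carrying the temporal state forward.
h = np.zeros((4, d))
for t in range(10):
    x = cases[t:t + 1].T @ W_in          # per-region input features, (4, d)
    h = np.tanh(A_hat @ (x + h @ W_h))   # graph conv + recurrence
pred = h @ w_out                         # next-step score per region
```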


Fast Temporal Wavelet Graph Neural Networks

Feb 25, 2023
Duc Thien Nguyen, Manh Duc Tuan Nguyen, Truong Son Hy, Risi Kondor

Figures 1–4 for Fast Temporal Wavelet Graph Neural Networks

Spatio-temporal signal forecasting plays an important role in numerous domains, especially in neuroscience and transportation. The task is challenging due to the highly intricate spatial structure, as well as the non-linear temporal dynamics, of the network. To facilitate reliable and timely forecasts for the human brain and traffic networks, we propose the Fast Temporal Wavelet Graph Neural Network (FTWGNN), which is both time- and memory-efficient for learning tasks on time-series data with an underlying graph structure, thanks to multiresolution analysis and wavelet theory on discrete spaces. We employ Multiresolution Matrix Factorization (MMF) (Kondor et al., 2014) to factorize the highly dense graph structure and compute the corresponding sparse wavelet basis, which allows us to construct a fast wavelet convolution as the backbone of our novel architecture. Experimental results on the real-world PEMS-BAY and METR-LA traffic datasets and the AJILE12 ECoG dataset show that FTWGNN is competitive with the state of the art while maintaining a low computational footprint. Our PyTorch implementation is publicly available at https://github.com/HySonLab/TWGNN.
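
The wavelet convolution at the core of this design can be sketched as spectral filtering in an orthonormal wavelet basis, y = W diag(g) Wᵀ x. The tiny hand-written Haar-style basis below stands in for the sparse basis that MMF would actually produce; with a sparse W, both transforms cost far less than a dense Fourier transform.

```python
import numpy as np

# A tiny orthonormal Haar-style wavelet basis on 4 nodes (columns are basis
# vectors). In FTWGNN the sparse basis comes from MMF instead.
s = 1 / np.sqrt(2)
W = np.array([
    [0.5,  0.5,  0.5,  0.5],   # scaling (father) function
    [0.5,  0.5, -0.5, -0.5],   # coarse wavelet
    [s,   -s,    0.0,  0.0],   # local wavelets
    [0.0,  0.0,  s,   -s],
]).T

def wavelet_conv(x, g):
    """Spectral filtering in the wavelet domain: W diag(g) W^T x."""
    return W @ (g * (W.T @ x))

x = np.array([1.0, 2.0, 3.0, 4.0])
y = wavelet_conv(x, g=np.ones(4))   # the identity filter recovers x
```

In a learned layer, `g` would be a trainable per-coefficient filter rather than all ones.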

* arXiv admin note: text overlap with arXiv:2111.01940 

Multiresolution Graph Transformers and Wavelet Positional Encoding for Learning Hierarchical Structures

Feb 25, 2023
Nhat Khang Ngo, Truong Son Hy, Risi Kondor

Figures 1–4 for Multiresolution Graph Transformers and Wavelet Positional Encoding for Learning Hierarchical Structures

Contemporary graph learning algorithms are not well-defined for large molecules since they do not consider the hierarchical interactions among the atoms, which are essential to determine the molecular properties of macromolecules. In this work, we propose Multiresolution Graph Transformers (MGT), the first graph transformer architecture that can learn to represent large molecules at multiple scales. MGT can learn to produce representations for the atoms and group them into meaningful functional groups or repeating units. We also introduce Wavelet Positional Encoding (WavePE), a new positional encoding method that can guarantee localization in both spectral and spatial domains. Our approach achieves competitive results on two macromolecule datasets consisting of polymers and peptides. Furthermore, the visualizations, including clustering results on macromolecules and low-dimensional spaces of their representations, demonstrate the capability of our methodology in learning to represent long-range and hierarchical structures. Our PyTorch implementation is publicly available at https://github.com/HySonLab/Multires-Graph-Transformer.
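
One plausible way to realize a wavelet-style positional encoding is to read off, per node, the diagonal of heat-wavelet operators exp(-sL) at several scales s; such diagonals are localized in both the spectral and spatial domains. The sketch below on a 4-node path graph is an assumption-laden illustration, not the paper's exact WavePE construction.

```python
import numpy as np

# 4-node path graph 0-1-2-3 and its combinatorial Laplacian.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A
lam, U = np.linalg.eigh(L)

# Per-node encoding: diagonal of exp(-s L) at several scales s.
scales = [0.5, 1.0, 2.0]
pe = np.stack([np.diag(U @ np.diag(np.exp(-s * lam)) @ U.T)
               for s in scales], axis=1)   # shape (num_nodes, num_scales)
```

On the path graph the two endpoint nodes get identical encodings, as do the two interior nodes, reflecting the graph's mirror symmetry.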


Modeling Polypharmacy and Predicting Drug-Drug Interactions using Deep Generative Models on Multimodal Graphs

Feb 17, 2023
Nhat Khang Ngo, Truong Son Hy, Risi Kondor

Figures 1–4 for Modeling Polypharmacy and Predicting Drug-Drug Interactions using Deep Generative Models on Multimodal Graphs

Latent representations of drugs and their targets produced by contemporary graph autoencoder models have proved useful in predicting many types of node-pair interactions on large networks, including drug-drug, drug-target, and target-target interactions. However, most existing approaches either model node latent spaces with rigid node distributions or fail to effectively capture the interrelations between drugs; these limitations hinder the methods from accurately predicting drug-pair interactions. In this paper, we present the effectiveness of variational graph autoencoders (VGAE) in modeling latent node representations on multimodal networks. Our approach can produce flexible latent spaces for each node type of the multimodal graph; the embeddings are later used for predicting links among node pairs under different edge types. To further enhance the model's performance, we propose a new method that concatenates Morgan fingerprints, which capture the molecular structure of each drug, with the drugs' latent embeddings before passing them to the decoding stage for link prediction. Our proposed model shows competitive results on three multimodal networks: (1) a multimodal graph consisting of drug and protein nodes, (2) a multimodal graph constructed from a subset of the DrugBank database involving drug nodes under different interaction types, and (3) a multimodal graph consisting of drug and cell line nodes. Our source code is publicly available at https://github.com/HySonLab/drug-interactions.
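
The concatenation step can be sketched in a few lines: structure features are appended to the latent codes before an inner-product decoder scores each drug pair. The random embeddings and random binary "fingerprints" below are placeholders (real Morgan fingerprints would come from a chemistry toolkit such as RDKit), and the scaled inner-product decoder is a simplification of the paper's per-edge-type decoders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder inputs: latent codes as a VGAE encoder might emit, and
# binary Morgan-style fingerprints (random here, RDKit-derived in practice).
n_drugs, d_latent, d_fp = 6, 16, 32
z = rng.normal(size=(n_drugs, d_latent))
fp = rng.integers(0, 2, size=(n_drugs, d_fp)).astype(float)

# Concatenate structural features with latent codes before decoding.
h = np.concatenate([z, fp], axis=1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Scaled inner-product decoder: interaction probability for each drug pair.
probs = sigmoid(h @ h.T / np.sqrt(h.shape[1]))
```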

* arXiv admin note: substantial text overlap with arXiv:2209.09941 

On the Connection Between MPNN and Graph Transformer

Feb 03, 2023
Chen Cai, Truong Son Hy, Rose Yu, Yusu Wang

Figures 1–4 for On the Connection Between MPNN and Graph Transformer

Graph Transformer (GT) has recently emerged as a new paradigm of graph learning algorithms, outperforming the previously popular Message Passing Neural Network (MPNN) on multiple benchmarks. Previous work (Kim et al., 2022) shows that with proper position embedding, GT can approximate MPNN arbitrarily well, implying that GT is at least as powerful as MPNN. In this paper, we study the inverse connection and show that MPNN with virtual node (VN), a commonly used heuristic with little theoretical understanding, is powerful enough to arbitrarily approximate the self-attention layer of GT. In particular, we first show that if we consider one type of linear transformer, the so-called Performer/Linear Transformer (Choromanski et al., 2020; Katharopoulos et al., 2020), then MPNN + VN with only O(1) depth and O(1) width can approximate a self-attention layer in Performer/Linear Transformer. Next, via a connection between MPNN + VN and DeepSets, we prove that MPNN + VN with O(n^d) width and O(1) depth can approximate the self-attention layer arbitrarily well, where d is the input feature dimension. Lastly, under some assumptions, we provide an explicit construction of MPNN + VN with O(1) width and O(n) depth approximating the self-attention layer in GT arbitrarily well. On the empirical side, we demonstrate that 1) MPNN + VN is a surprisingly strong baseline, outperforming GT on the recently proposed Long Range Graph Benchmark (LRGB) dataset, 2) our MPNN + VN improves over an early implementation on a wide range of OGB datasets, and 3) MPNN + VN outperforms Linear Transformer and MPNN on the climate modeling task.
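
The Performer/Linear Transformer case has a compact illustration: with a positive feature map phi, a single global summary (exactly what a virtual node can aggregate in one round of message passing) suffices to reproduce linear attention. The sketch below uses the elu+1 feature map of Katharopoulos et al. (2020); shapes and data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 4
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))

def feature_map(x):
    """elu(x) + 1 feature map (positive), as in Katharopoulos et al. (2020)."""
    return np.where(x > 0, x + 1.0, np.exp(x))

phi_q, phi_k = feature_map(Q), feature_map(K)

# The two global sums below are what a virtual node can collect in one
# message-passing round: every node sends phi(k_i) v_i^T and phi(k_i).
S = phi_k.T @ V            # (d, d) summary held by the virtual node
zsum = phi_k.sum(axis=0)   # (d,) normalizer held by the virtual node

# Each node then reads the virtual node to produce its attention output.
out = (phi_q @ S) / (phi_q @ zsum)[:, None]
```

This matches the quadratic-cost formulation sum_j (phi(q_i)·phi(k_j)) v_j / sum_j phi(q_i)·phi(k_j), computed in O(n) instead of O(n^2).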
