Long Tran-Thanh

Examining the Effects of Degree Distribution and Homophily in Graph Learning Models

Jul 17, 2023
Mustafa Yasir, John Palowitch, Anton Tsitsulin, Long Tran-Thanh, Bryan Perozzi

Despite a surge in interest in GNN development, homogeneity in benchmarking datasets still presents a fundamental issue for GNN research. GraphWorld is a recent solution that uses the Stochastic Block Model (SBM) to generate diverse populations of synthetic graphs for benchmarking any GNN task. Despite its success, the SBM imposed fundamental limitations on the kinds of graph structure GraphWorld could create. In this work we examine how two additional synthetic graph generators can improve GraphWorld's evaluation: LFR, a well-established model in the graph clustering literature, and CABAM, a recent adaptation of the Barabasi-Albert model tailored for GNN benchmarking. By integrating these generators, we significantly expand the coverage of graph space within the GraphWorld framework while preserving key graph properties observed in real-world networks. To demonstrate their effectiveness, we generate 300,000 graphs to benchmark 11 GNN models on a node classification task. We find that GNN performance varies in response to homophily, degree distribution, and feature signal. Based on these findings, we classify models by their sensitivity to the new generators under these properties. Additionally, we release our extensions to GraphWorld on the GitHub repository, enabling further evaluation of GNN performance on new graphs.
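
For readers unfamiliar with the LFR generator mentioned above, the following minimal sketch samples an LFR benchmark graph with networkx and inspects its degree distribution and edge homophily. The parameter values follow the networkx documentation example rather than any GraphWorld configuration, and the snippet is only an illustration of the generator, not the paper's GraphWorld integration.

```python
# A minimal sketch: sample an LFR graph and measure degree / homophily.
# Parameters follow the networkx docs example, not GraphWorld settings.
import networkx as nx

# Power-law degree and community-size distributions; mu controls the
# fraction of each node's edges that cross community boundaries.
G = nx.LFR_benchmark_graph(
    n=250, tau1=3, tau2=1.5, mu=0.1,
    average_degree=5, min_community=20, seed=10,
)

# Each node stores its ground-truth community as a set attribute;
# use the smallest member id as a community label.
labels = {v: min(G.nodes[v]["community"]) for v in G}

degrees = [d for _, d in G.degree()]
same = sum(labels[u] == labels[v] for u, v in G.edges())
homophily = same / G.number_of_edges()  # edge homophily ratio

print(f"mean degree: {sum(degrees) / len(degrees):.2f}")
print(f"edge homophily: {homophily:.2f}")
```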

* Accepted to Workshop on Graph Learning Benchmarks at KDD 2023 

Predicting COVID-19 pandemic by spatio-temporal graph neural networks: A New Zealand's study

May 12, 2023
Viet Bach Nguyen, Truong Son Hy, Long Tran-Thanh, Nhung Nghiem

Modeling and simulation of pandemic dynamics play an essential role in understanding and addressing the spread of highly infectious diseases such as COVID-19. In this work, we propose a novel deep learning architecture named Attention-based Multiresolution Graph Neural Networks (ATMGNN) that learns to combine spatial graph information, i.e. geographical data, with temporal information, i.e. time-series data of the number of COVID-19 cases, to predict the future dynamics of the pandemic. The key innovation is that our method can capture the multiscale structures of the spatial graph via a learning-to-cluster algorithm in a data-driven manner. This allows our architecture to learn to pick up either local or global signals of a pandemic, and to model both long-range spatial and temporal dependencies. Importantly, we collected and assembled a new dataset for New Zealand, and established a comprehensive benchmark of statistical methods, temporal architectures, and graph neural networks alongside our spatio-temporal model. We also incorporated socioeconomic cross-sectional data to further enhance our predictions. Our proposed model shows highly robust predictions and outperforms all other baselines on various metrics for our new New Zealand dataset as well as existing datasets for England, France, Italy and Spain. As future work, we plan to extend this work to real-time prediction at a global scale. Our data and source code are publicly available at https://github.com/HySonLab/pandemic_tgnn
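
To make the spatial-plus-temporal combination concrete, here is a minimal generic baseline in that spirit: per-step neighbourhood aggregation with a normalized adjacency followed by an LSTM over time. The layer sizes, tensor shapes, and the plain linear node encoder are illustrative assumptions; this is not the ATMGNN architecture.

```python
# A minimal spatio-temporal sketch (not the ATMGNN): per-step graph
# aggregation with a normalized adjacency, followed by an LSTM over time.
import torch
import torch.nn as nn

class SpatioTemporalBaseline(nn.Module):
    def __init__(self, n_features, hidden_dim):
        super().__init__()
        self.spatial = nn.Linear(n_features, hidden_dim)     # shared node encoder
        self.temporal = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)                 # next-step case count

    def forward(self, x, adj_norm):
        # x: (n_nodes, n_steps, n_features), adj_norm: (n_nodes, n_nodes)
        h = torch.relu(self.spatial(x))               # node-wise encoding
        h = torch.einsum("ij,jtf->itf", adj_norm, h)  # neighbourhood aggregation
        out, _ = self.temporal(h)                     # temporal dynamics per node
        return self.head(out[:, -1])                  # (n_nodes, 1) prediction

# Toy usage with random data and a row-normalized random adjacency.
n_nodes, n_steps, n_features = 20, 14, 4
x = torch.randn(n_nodes, n_steps, n_features)
adj = torch.rand(n_nodes, n_nodes)
adj_norm = adj / adj.sum(dim=1, keepdim=True)
print(SpatioTemporalBaseline(n_features, 32)(x, adj_norm).shape)
```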

Achieving Better Regret against Strategic Adversaries

Feb 13, 2023
Le Cong Dinh, Tri-Dung Nguyen, Alain Zemkoho, Long Tran-Thanh

We study online learning problems in which the learner has extra knowledge about the adversary's behaviour, i.e., game-theoretic settings where opponents typically follow some no-external-regret learning algorithm. Under this assumption, we propose two new online learning algorithms, Accurate Follow the Regularized Leader (AFTRL) and Prod-Best Response (Prod-BR), that intensively exploit this extra knowledge while maintaining the no-regret property in the worst-case scenario of inaccurate extra information. Specifically, AFTRL achieves $O(1)$ external regret or $O(1)$ \emph{forward regret} against a no-external-regret adversary, compared with the $O(\sqrt{T})$ \emph{dynamic regret} of Prod-BR. To the best of our knowledge, our algorithm is the first to consider forward regret and to achieve $O(1)$ regret against strategic adversaries. When playing zero-sum games with Accurate Multiplicative Weights Update (AMWU), a special case of AFTRL, we achieve \emph{last-round convergence} to the Nash Equilibrium. We also provide numerical experiments to further support our theoretical results; in particular, we demonstrate that our methods achieve significantly better regret bounds and rates of last-round convergence than the state of the art (e.g., Multiplicative Weights Update (MWU) and its optimistic counterpart, OMWU).
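
As a rough illustration of how extra knowledge about the opponent can enter a multiplicative-weights-style update, the sketch below contrasts plain MWU with a variant that also discounts by a predicted next loss vector. The prediction mechanism and step size are assumptions made for illustration; this is not the paper's AFTRL or AMWU update.

```python
# A rough sketch of Multiplicative Weights Update (MWU) and a variant that
# folds in a predicted next loss vector; NOT the paper's AFTRL/AMWU updates.
import numpy as np

def mwu_step(weights, loss, eta):
    """Standard MWU: downweight actions in proportion to their observed loss."""
    w = weights * np.exp(-eta * loss)
    return w / w.sum()

def predictive_mwu_step(weights, loss, predicted_next_loss, eta):
    """Also discount by a prediction of the opponent's next loss vector
    (an illustrative way to inject extra knowledge about the opponent)."""
    w = weights * np.exp(-eta * (loss + predicted_next_loss))
    return w / w.sum()

rng = np.random.default_rng(0)
p = np.ones(3) / 3
for t in range(100):
    loss = rng.uniform(size=3)          # losses chosen by the adversary
    p = mwu_step(p, loss, eta=0.1)
print(p)
```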

Invariant Lipschitz Bandits: A Side Observation Approach

Dec 14, 2022
Nam Phuong Tran, The-Anh Ta, Long Tran-Thanh

Symmetry arises in many optimization and decision-making problems and has attracted considerable attention from the optimization community: by exploiting such symmetries, the process of searching for optimal solutions can be improved significantly. Despite its success in (offline) optimization, the use of symmetry has not been well examined in online optimization settings, especially in the bandit literature. In this paper we therefore study the invariant Lipschitz bandit setting, a subclass of Lipschitz bandits in which the reward function and the set of arms are preserved under a group of transformations. We introduce an algorithm named \texttt{UniformMesh-N}, which naturally integrates side observations using group orbits into the \texttt{UniformMesh} algorithm (Kleinberg, 2005), which uniformly discretizes the set of arms. Using the side-observation approach, we prove an improved regret upper bound that depends on the cardinality of the group, provided the group is finite. We also prove a matching regret lower bound for the invariant Lipschitz bandit class (up to logarithmic factors). We hope that our work will ignite further investigation of symmetry in bandit theory and sequential decision-making theory in general.
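
To give a concrete picture of the side-observation idea, the sketch below runs UCB over a uniform mesh of [0, 1) and, whenever a cell is pulled, credits every cell in its orbit under an assumed cyclic shift group. The group, mesh size, reward function, and exploration bonus are all illustrative choices, not the paper's \texttt{UniformMesh-N} pseudocode.

```python
# Illustrative sketch: uniform discretization of [0, 1) with side observations
# shared across the orbit of a cyclic shift group; not the paper's pseudocode.
import numpy as np

K = 20                       # number of mesh cells over [0, 1)
shift = K // 4               # assumed invariance under x -> x + 1/4 (mod 1)
orbits = [sorted({(i + shift * g) % K for g in range(4)}) for i in range(K)]

counts, sums = np.zeros(K), np.zeros(K)
rng = np.random.default_rng(1)

def reward(cell):
    x = (cell + 0.5) / K
    mean = np.sin(8 * np.pi * x) * 0.5 + 0.5   # invariant under the shift
    return mean + 0.1 * rng.standard_normal()

for t in range(1, 2001):
    ucb = sums / np.maximum(counts, 1) + np.sqrt(2 * np.log(t) / np.maximum(counts, 1))
    arm = int(np.argmax(ucb))
    r = reward(arm)
    # Side observations: the invariance lets us credit every cell in the orbit.
    for j in orbits[arm]:
        counts[j] += 1
        sums[j] += r

print("most-pulled cell:", int(np.argmax(counts)))
```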

Multi-Player Bandits Robust to Adversarial Collisions

Nov 15, 2022
Shivakumar Mahesh, Anshuka Rangi, Haifeng Xu, Long Tran-Thanh

Motivated by cognitive radios, the stochastic Multi-Player Multi-Armed Bandit problem has been extensively studied in recent years. In this setting, each player pulls an arm and receives a reward corresponding to that arm if there is no collision, i.e., if the arm was selected by a single player only; otherwise, the player receives no reward. In this paper, we consider the presence of malicious players (or attackers) who obstruct the cooperative players (or defenders) from maximizing their rewards by deliberately colliding with them. We provide the first decentralized and robust algorithm, RESYNC, for defenders whose performance deteriorates gracefully as $\tilde{O}(C)$ as the number of collisions $C$ caused by the attackers increases. We show that this algorithm is order-optimal by proving a lower bound which scales as $\Omega(C)$. The algorithm is agnostic both to the algorithm used by the attackers and to the number of collisions $C$ faced from the attackers.
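
The collision-based feedback model is simple to state in code. The toy simulation below plays a single round with two defenders and one attacker that deliberately collides on the best arm; the arm means are arbitrary placeholders, and the snippet illustrates only the feedback structure, not the RESYNC algorithm.

```python
# Toy simulation of the collision feedback model (not the RESYNC algorithm):
# a player earns the arm's reward only if no one else pulled the same arm.
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.9, 0.6, 0.3, 0.2])         # arm means (illustrative)

def play_round(defender_arms, attacker_arms):
    pulls = list(defender_arms) + list(attacker_arms)
    rewards = []
    for arm in defender_arms:
        if pulls.count(arm) > 1:                # collision -> zero reward
            rewards.append(0.0)
        else:
            rewards.append(float(rng.random() < means[arm]))
    return rewards

# Two defenders on distinct arms; the attacker targets the best arm.
print(play_round(defender_arms=[0, 1], attacker_arms=[0]))
```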

Label driven Knowledge Distillation for Federated Learning with non-IID Data

Sep 30, 2022
Minh-Duong Nguyen, Quoc-Viet Pham, Dinh Thai Hoang, Long Tran-Thanh, Diep N. Nguyen, Won-Joo Hwang

In real-world applications, Federated Learning (FL) faces two challenges: (1) scalability, especially when applied to massive IoT networks, and (2) robustness to environments with heterogeneous data. To address the first problem, we design a novel FL framework named Full-stack FL (F2L). More specifically, F2L utilizes a hierarchical network architecture, which makes it possible to extend the FL network without reconstructing the whole system. Moreover, leveraging the advantages of this hierarchical design, we propose a new label-driven knowledge distillation (LKD) technique at the global server to address the second problem. In contrast to current knowledge distillation techniques, LKD can train a student model that consolidates good knowledge from all teacher models. Our proposed algorithm can therefore effectively extract knowledge of the regions' data distributions (i.e., the regional aggregated models) to reduce the divergence between clients' models when operating under an FL system with non-independent and identically distributed data. Extensive experimental results reveal that (i) our F2L method significantly improves overall FL efficiency in all global distillations, and (ii) F2L converges rapidly as global distillation stages occur, rather than improving incrementally with each communication cycle.
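
One plausible (assumed) reading of a label-driven distillation objective at the server is to weight each regional teacher's soft predictions by that region's per-class data share, as in the sketch below. The weighting scheme, temperature, and tensor shapes are illustrative guesses, not the paper's LKD objective.

```python
# A hedged sketch of server-side distillation from regional teachers, where each
# teacher's soft labels are weighted by how much of each class its region holds.
# One plausible reading of "label-driven"; NOT the paper's LKD objective.
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits_list, label_weights, T=2.0):
    # student_logits: (B, C); teacher_logits_list: list of (B, C) tensors
    # label_weights: (n_teachers, C), each row a region's class share
    student_logp = F.log_softmax(student_logits / T, dim=-1)
    loss = 0.0
    for t_logits, w in zip(teacher_logits_list, label_weights):
        teacher_p = F.softmax(t_logits / T, dim=-1)
        # weight the per-class KL contribution by the region's class share
        kl = (teacher_p * (teacher_p.log() - student_logp)) * w
        loss = loss + kl.sum(dim=-1).mean()
    return (T * T) * loss / len(teacher_logits_list)

# Toy usage with random logits and random class-share weights.
B, C = 8, 5
student = torch.randn(B, C)
teachers = [torch.randn(B, C) for _ in range(3)]
weights = torch.softmax(torch.randn(3, C), dim=-1)
print(distill_loss(student, teachers, weights))
```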

* 28 pages, 5 figures, 10 tables 

Understanding the Limits of Poisoning Attacks in Episodic Reinforcement Learning

Aug 29, 2022
Anshuka Rangi, Haifeng Xu, Long Tran-Thanh, Massimo Franceschetti

To understand the security threats to reinforcement learning (RL) algorithms, this paper studies poisoning attacks that manipulate \emph{any} order-optimal learning algorithm towards a targeted policy in episodic RL, and examines the potential damage of two natural types of poisoning attacks, i.e., the manipulation of \emph{rewards} and \emph{actions}. We discover that the effect of the attacks crucially depends on whether the rewards are bounded or unbounded. In bounded-reward settings, we show that reward manipulation alone or action manipulation alone cannot guarantee a successful attack. However, by combining reward and action manipulation, the adversary can manipulate any order-optimal learning algorithm to follow any targeted policy with $\tilde{\Theta}(\sqrt{T})$ total attack cost, which is order-optimal, without any knowledge of the underlying MDP. In contrast, in unbounded-reward settings, we show that reward manipulation attacks alone are sufficient for an adversary to successfully manipulate any order-optimal learning algorithm to follow any targeted policy using $\tilde{O}(\sqrt{T})$ contamination. Our results reveal useful insights into what can and cannot be achieved by poisoning attacks, and are set to spur more work on the design of robust RL algorithms.
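
To illustrate the mechanics behind a combined reward-and-action attack, the sketch below poisons a single transition: when the learner deviates from the target policy, the attacker forces the target action in the environment and shows the learner a depressed reward for its own choice. The cost measure and reward bounds are illustrative assumptions, not the paper's attack construction or its cost analysis.

```python
# Illustrative mechanics of a combined reward/action poisoning attack on one
# transition; not the paper's attack construction or cost analysis.
def poison_transition(state, learner_action, reward, target_policy,
                      r_min=0.0, r_max=1.0):
    """Return the executed action, the reward shown to the learner, and the cost paid."""
    target_action = target_policy[state]
    if learner_action == target_action:
        return learner_action, reward, 0.0          # no manipulation needed
    # Action manipulation: the environment executes the target action instead.
    forced_action = target_action
    # Reward manipulation: the learner sees a depressed reward for its own
    # (deviating) choice, making deviations look unattractive.
    shown_reward = r_min
    cost = 1.0 + abs(reward - shown_reward)         # one illustrative cost measure
    return forced_action, shown_reward, cost

target_policy = {0: 1, 1: 0}
print(poison_transition(state=0, learner_action=0, reward=0.8,
                        target_policy=target_policy))
```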

* Accepted at International Joint Conferences on Artificial Intelligence (IJCAI) 2022 

Temporal Multiresolution Graph Neural Networks For Epidemic Prediction

Jun 01, 2022
Truong Son Hy, Viet Bach Nguyen, Long Tran-Thanh, Risi Kondor

In this paper, we introduce Temporal Multiresolution Graph Neural Networks (TMGNN), the first architecture that both learns to construct multiscale and multiresolution graph structures and incorporates time-series signals to capture the temporal changes of dynamic graphs. We have applied our proposed model to the task of predicting the future spread of epidemics and pandemics based on historical time-series data collected from the actual COVID-19 pandemic and a chickenpox epidemic in several European countries, and have obtained competitive results in comparison to previous state-of-the-art temporal architectures and graph learning algorithms. We show that capturing the multiscale and multiresolution structure of graphs is important for extracting the local or global information that plays a critical role in understanding the dynamics of a global pandemic such as COVID-19, which started in a single city and spread to the whole world. Our work opens a promising research direction in forecasting and mitigating future epidemics and pandemics.
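
The learnable multiresolution construction can be pictured as a soft cluster-assignment pooling step that coarsens both the node features and the adjacency matrix, as in the generic DiffPool-style sketch below; the dimensions are illustrative, and this is not TMGNN's actual construction.

```python
# Generic learnable coarsening step (DiffPool-style soft assignment); an
# illustration of multiresolution pooling, not TMGNN's actual construction.
import torch
import torch.nn as nn

class SoftPool(nn.Module):
    def __init__(self, in_dim, n_clusters):
        super().__init__()
        self.assign = nn.Linear(in_dim, n_clusters)   # node -> cluster scores

    def forward(self, x, adj):
        # x: (n_nodes, in_dim), adj: (n_nodes, n_nodes)
        s = torch.softmax(self.assign(x), dim=-1)     # soft cluster membership
        x_coarse = s.T @ x                            # cluster features
        adj_coarse = s.T @ adj @ s                    # coarsened adjacency
        return x_coarse, adj_coarse

x = torch.randn(30, 8)
adj = torch.rand(30, 30)
xc, ac = SoftPool(8, 5)(x, adj)
print(xc.shape, ac.shape)   # (5, 8), (5, 5)
```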

Adversarial Socialbot Learning via Multi-Agent Deep Hierarchical Reinforcement Learning

Oct 20, 2021
Thai Le, Long Tran-Thanh, Dongwon Lee

Socialbots are software-driven user accounts on social platforms that act autonomously (mimicking human behavior), with the aim of influencing the opinions of other users or spreading targeted misinformation for particular goals. Because socialbots undermine the ecosystem of social platforms, they are often considered harmful, and there have been several computational efforts to automatically detect them. However, to the best of our knowledge, the adversarial nature of these socialbots has not yet been studied. This raises the question: can adversaries controlling socialbots exploit AI techniques to their advantage? We demonstrate that it is indeed possible for adversaries to exploit computational learning mechanisms such as reinforcement learning (RL) to maximize the influence of socialbots while avoiding detection. We first formulate adversarial socialbot learning as a cooperative game between two functional hierarchical RL agents: one agent curates a sequence of activities that avoids detection, while the other maximizes network influence by selectively connecting with the right users. Our proposed policy networks are trained on a vast number of synthetic graphs and generalize better than baselines on unseen real-life graphs, both in terms of maximizing network influence (up to +18%) and sustaining stealthiness (up to +40% undetectability) under a strong bot detector (with 90% detection accuracy). During inference, the complexity of our approach scales linearly, independently of a network's structure and the virality of news. This makes our approach a practical adversarial attack when deployed in a real-life setting.
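
The two-agent formulation can be pictured as one policy choosing an activity type (for stealth) and another choosing whom to connect with (for influence), sharing a cooperative reward. The schematic loop below uses random stand-in policies and placeholder activity and reward definitions; it is not the paper's trained networks, detector, or environment.

```python
# Schematic of the two-agent cooperative loop: one policy picks an activity,
# the other picks a user to connect with. Random stand-ins only; not the
# paper's trained networks, bot detector, or environment.
import random

random.seed(0)
ACTIVITIES = ["post", "reply", "retweet", "follow"]   # hypothetical activity set
USERS = list(range(50))

def activity_policy(history):
    return random.choice(ACTIVITIES)        # stand-in for the stealth agent

def connection_policy(graph_state):
    return random.choice(USERS)             # stand-in for the influence agent

history, influence = [], 0
for step in range(10):
    act = activity_policy(history)
    history.append(act)
    if act == "follow":
        target = connection_policy(graph_state=None)
        influence += 1                      # placeholder for network-influence gain
    # A shared reward would combine influence with an undetectability signal here.

print(f"activities: {history}\napprox. influence: {influence}")
```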
