Hoang Thanh Lam


Zshot: An Open-source Framework for Zero-Shot Named Entity Recognition and Relation Extraction

Jul 25, 2023
Gabriele Picco, Marcos Martínez Galindo, Alberto Purpura, Leopold Fuchs, Vanessa López, Hoang Thanh Lam

The Zero-Shot Learning (ZSL) task pertains to the identification of entities or relations in texts that were not seen during training. ZSL has emerged as a critical research area due to the scarcity of labeled data in specific domains, and its applications have grown significantly in recent years. With the advent of large pretrained language models, several novel methods have been proposed, resulting in substantial improvements in ZSL performance. There is a growing demand, both in the research community and industry, for a comprehensive ZSL framework that facilitates the development and accessibility of the latest methods and pretrained models. In this study, we propose a novel ZSL framework called Zshot that aims to address the aforementioned challenges. Our primary objective is to provide a platform that allows researchers to compare different state-of-the-art ZSL methods with standard benchmark datasets. Additionally, we have designed our framework to support industry use with readily available APIs for production under the standard SpaCy NLP pipeline. Our API is extensible and evaluable; moreover, we include numerous enhancements such as accuracy boosting via pipeline ensembling and visualization utilities available as a SpaCy extension.

* Association for Computational Linguistics. 3 (2023) 357-368  
* Accepted at ACL 2023 
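
As a rough illustration of how a zero-shot NER component plugs into a standard spaCy pipeline, here is a minimal sketch using only spaCy's documented add_pipe mechanism; the component name and its hard-coded logic are placeholders, not Zshot's actual API.

import spacy
from spacy.language import Language
from spacy.tokens import Span

# Hypothetical component name; a real zero-shot linker would score spans
# against entity descriptions supplied at runtime instead of hard-coding one.
@Language.component("zero_shot_ner_stub")
def zero_shot_ner_stub(doc):
    if len(doc) > 1:
        doc.ents = [Span(doc, 0, 1, label="ORG")]
    return doc

nlp = spacy.blank("en")                      # any spaCy pipeline works here
nlp.add_pipe("zero_shot_ner_stub", last=True)

doc = nlp("IBM Research released an open-source zero-shot NER framework.")
print([(ent.text, ent.label_) for ent in doc.ents])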

Otter-Knowledge: benchmarks of multimodal knowledge graph representation learning from different sources for drug discovery

Jun 23, 2023
Hoang Thanh Lam, Marco Luca Sbodio, Marcos Martínez Galindo, Mykhaylo Zayats, Raúl Fernández-Díaz, Víctor Valls, Gabriele Picco, Cesar Berrospi Ramis, Vanessa López

Recent research in representation learning utilizes large databases of proteins or molecules to acquire knowledge of drug and protein structures through unsupervised learning techniques. These pre-trained representations have proven to significantly enhance the accuracy of subsequent tasks, such as predicting the affinity between drugs and target proteins. In this study, we demonstrate that by incorporating knowledge graphs from diverse sources and modalities into the sequence or SMILES representations, we can further enrich the representations and achieve state-of-the-art results on established benchmark datasets. We provide preprocessed and integrated data obtained from 7 public sources, which encompass over 30M triples. We also make available the pre-trained models based on this data, along with the reported outcomes of their performance on three widely used benchmark datasets for drug-target binding affinity prediction found in the Therapeutic Data Commons (TDC) benchmarks, and we publicly release the source code for training models on the benchmark datasets. Our objective in releasing these pre-trained models, accompanied by clean data for model pretraining and benchmark results, is to encourage research in knowledge-enhanced representation learning.
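
As a hedged sketch of where such pretrained representations are evaluated, the snippet below loads one of the TDC drug-target binding affinity datasets (assuming the PyTDC DTI loader's usual interface and split format) and illustrates the knowledge-enhancement idea with stand-in embedding vectors.

from tdc.multi_pred import DTI
import numpy as np

data = DTI(name="DAVIS")          # one of the TDC binding-affinity datasets
split = data.get_split()          # dict of train / valid / test DataFrames
print(split["train"].head())      # drug SMILES, target sequence, affinity label

# Knowledge-enhancement idea with stand-in vectors: concatenate a
# sequence-based embedding with a KG-derived embedding before the affinity head.
seq_emb = np.random.randn(128)    # stand-in for a SMILES/protein encoder output
kg_emb = np.random.randn(64)      # stand-in for an Otter-Knowledge graph embedding
fused = np.concatenate([seq_emb, kg_emb])
print(fused.shape)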


Evaluating Robustness of Cooperative MARL: A Model-based Approach

Feb 07, 2022
Nhan H. Pham, Lam M. Nguyen, Jie Chen, Hoang Thanh Lam, Subhro Das, Tsui-Wei Weng

In recent years, a proliferation of methods has been developed for cooperative multi-agent reinforcement learning (c-MARL). However, the robustness of c-MARL agents against adversarial attacks has rarely been explored. In this paper, we propose to evaluate the robustness of c-MARL agents via a model-based approach. Our proposed formulation can craft stronger adversarial state perturbations on c-MARL agent(s) to lower the total team reward more than existing model-free approaches. In addition, we propose the first victim-agent selection strategy, which allows us to develop even stronger adversarial attacks. Numerical experiments on multi-agent MuJoCo benchmarks illustrate the advantage of our approach over other baselines: the proposed model-based attack consistently outperforms them in all tested environments.
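
A toy sketch of the model-based attack idea, with random stand-in networks rather than the paper's architecture: a bounded perturbation of the victim's observation is optimised by gradient descent through a learned, differentiable reward model.

import torch

state_dim, action_dim, eps, steps = 8, 2, 0.1, 50
policy = torch.nn.Sequential(torch.nn.Linear(state_dim, 32), torch.nn.Tanh(),
                             torch.nn.Linear(32, action_dim))
reward_model = torch.nn.Sequential(torch.nn.Linear(state_dim + action_dim, 32),
                                   torch.nn.Tanh(), torch.nn.Linear(32, 1))

state = torch.randn(state_dim)
delta = torch.zeros(state_dim, requires_grad=True)   # adversarial perturbation
opt = torch.optim.Adam([delta], lr=1e-2)
for _ in range(steps):
    perturbed = state + delta
    action = policy(perturbed)                       # victim acts on the perturbed observation
    pred_reward = reward_model(torch.cat([perturbed, action])).sum()
    opt.zero_grad()
    pred_reward.backward()                           # minimise the predicted team reward
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)                      # keep the attack inside an eps-ball
print("crafted perturbation:", delta.detach())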


Ensembling Graph Predictions for AMR Parsing

Oct 18, 2021
Hoang Thanh Lam, Gabriele Picco, Yufang Hou, Young-Suk Lee, Lam M. Nguyen, Dzung T. Phan, Vanessa López, Ramon Fernandez Astudillo

In many machine learning tasks, models are trained to predict structured data such as graphs. For example, in natural language processing, it is very common to parse texts into dependency trees or abstract meaning representation (AMR) graphs. Ensemble methods, on the other hand, combine predictions from multiple models to create a new one that is more robust and accurate than the individual predictions. Many ensembling techniques have been proposed for classification or regression problems; however, ensemble graph prediction has not been studied thoroughly. In this work, we formalize this problem as mining the largest graph that is most supported by a collection of graph predictions. As the problem is NP-hard, we propose an efficient heuristic algorithm to approximate the optimal solution. To validate our approach, we carried out experiments on AMR parsing. The experimental results demonstrate that the proposed approach can combine the strengths of state-of-the-art AMR parsers to create new predictions that are more accurate than any individual model on five standard benchmark datasets.

* Accepted at NeurIPS 2021 
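
A toy illustration of the notion of "support" behind graph ensembling; the paper's algorithm is a more involved heuristic for the NP-hard problem, while this only shows edge-level voting over predicted AMR triples.

from collections import Counter

# Keep the triples supported by at least half of the individual predictions.
predictions = [
    {("want-01", ":ARG0", "boy"), ("want-01", ":ARG1", "go-02"), ("go-02", ":ARG0", "boy")},
    {("want-01", ":ARG0", "boy"), ("want-01", ":ARG1", "go-02")},
    {("want-01", ":ARG0", "boy"), ("want-01", ":ARG1", "go-02"), ("go-02", ":ARG0", "girl")},
]
support = Counter(t for g in predictions for t in g)
threshold = len(predictions) / 2
ensemble = {t for t, c in support.items() if c >= threshold}
print(sorted(ensemble))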

Neural Unification for Logic Reasoning over Natural Language

Sep 17, 2021
Gabriele Picco, Hoang Thanh Lam, Marco Luca Sbodio, Vanessa Lopez Garcia

Automated Theorem Proving (ATP) deals with the development of computer programs that are able to show that some conjectures (queries) are a logical consequence of a set of axioms (facts and rules). There exist several successful ATPs where conjectures and axioms are formally provided (e.g. formalised as First Order Logic formulas). Recent approaches, such as (Clark et al., 2020), have proposed transformer-based architectures for deriving conjectures given axioms expressed in natural language (English). The conjecture is verified through a binary text classifier, where the transformer model is trained to predict the truth value of a conjecture given the axioms. The RuleTaker approach of (Clark et al., 2020) achieves appealing results both in terms of accuracy and in the ability to generalize, showing that when the model is trained with deep enough queries (at least 3 inference steps), the transformer is able to correctly answer the majority of queries (97.6%) that require up to 5 inference steps. In this work we propose a new architecture, the Neural Unifier, and an associated training procedure, which achieves state-of-the-art results in terms of generalisation, showing that by mimicking a well-known inference procedure, backward chaining, it is possible to answer deep queries even when the model is trained only on shallow ones. The approach is demonstrated in experiments using a diverse set of benchmark data.

* Accepted at EMNLP 2021 Findings 
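
For readers unfamiliar with backward chaining, the inference procedure the Neural Unifier mimics, here is a minimal symbolic sketch; the paper itself operates on natural-language facts and rules with transformer models, not on symbols.

# Minimal propositional backward-chaining checker.
facts = {"bald_eagle_is_a_bird"}
rules = [  # (body, head): if all body atoms hold, the head holds
    (["bald_eagle_is_a_bird"], "bald_eagle_has_feathers"),
    (["bald_eagle_has_feathers"], "bald_eagle_can_fly"),
]

def prove(query, depth=5):
    if query in facts:
        return True
    if depth == 0:
        return False
    # Backward chaining: find a rule whose head matches the query and
    # recursively try to prove every atom in its body.
    return any(head == query and all(prove(b, depth - 1) for b in body)
               for body, head in rules)

print(prove("bald_eagle_can_fly"))   # True, via two chained rules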

Neural Feature Learning From Relational Database

Jun 17, 2018
Hoang Thanh Lam, Tran Ngoc Minh, Mathieu Sinn, Beat Buesser, Martin Wistuba

Feature engineering is one of the most important but most tedious tasks in data science. This work studies the automation of feature learning from relational databases. We first prove theoretically that finding the optimal features from relational data for predictive tasks is NP-hard. We then propose an efficient rule-based approach built on heuristics, together with a deep neural network, to automatically learn appropriate features from relational data. We benchmark our approaches in ensembles in past Kaggle competitions. Our new approach wins late medals and beats the state-of-the-art solutions by significant margins. To the best of our knowledge, this is the first time an automated data science system could win medals in Kaggle competitions with complex relational databases.
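
A hedged sketch of the neural side of the idea: the rows joined from a related table form a variable-length set per entity, and a small network pools them into learned features. The architecture and dimensions below are illustrative, not the paper's.

import torch

# Per-entity sets of joined rows (2 entities, 4 raw numeric columns each); random stand-ins.
related_rows = [torch.randn(3, 4), torch.randn(5, 4)]
encoder = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU())

# Encode every joined row, then pool over the variable-length set per entity.
learned = torch.stack([encoder(rows).mean(dim=0) for rows in related_rows])
print(learned.shape)   # torch.Size([2, 16]) learned features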


Automated Image Data Preprocessing with Deep Reinforcement Learning

Jun 15, 2018
Tran Ngoc Minh, Mathieu Sinn, Hoang Thanh Lam, Martin Wistuba

Data preparation, i.e. the process of transforming raw data into a format that can be used for training effective machine learning models, is a tedious and time-consuming task. For image data, preprocessing typically involves a sequence of basic transformations such as cropping, filtering, rotating or flipping images. Currently, data scientists decide manually based on their experience which transformations to apply in which particular order to a given image data set. Besides constituting a bottleneck in real-world data science projects, manual image data preprocessing may yield suboptimal results as data scientists need to rely on intuition or trial-and-error approaches when exploring the space of possible image transformations and thus might not be able to discover the most effective ones. To mitigate the inefficiency and potential ineffectiveness of manual data preprocessing, this paper proposes a deep reinforcement learning framework to automatically discover the optimal data preprocessing steps for training an image classifier. The framework takes as input sets of labeled images and predefined preprocessing transformations. It jointly learns the classifier and the optimal preprocessing transformations for individual images. Experimental results show that the proposed approach not only improves the accuracy of image classifiers, but also makes them substantially more robust to noisy inputs at test time.
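
A hedged, toy stand-in for the search problem the framework solves: a space of candidate transformations, a sampled preprocessing sequence, and a reward equal to the resulting validation accuracy. Plain random search replaces the paper's deep RL policy here, and evaluate() is only a placeholder.

import random

TRANSFORMS = ["crop", "horizontal_flip", "rotate_15", "gaussian_blur", "identity"]

def evaluate(sequence):
    # Placeholder for: preprocess the training images with `sequence`,
    # train the image classifier, and return its validation accuracy.
    return random.random()

best_seq, best_reward = None, float("-inf")
for _ in range(20):                                   # budget of candidate pipelines
    seq = [random.choice(TRANSFORMS) for _ in range(3)]
    reward = evaluate(seq)
    if reward > best_reward:
        best_seq, best_reward = seq, reward
print("best preprocessing sequence:", best_seq)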


Learning Correlation Space for Time Series

May 15, 2018
Han Qiu, Hoang Thanh Lam, Francesco Fusco, Mathieu Sinn

We propose an approximation algorithm for efficient correlation search in time series data. In our method, we use the Fourier transform and a neural network to embed time series into a low-dimensional Euclidean space. The embedding space is learned such that time series correlation can be effectively approximated from the Euclidean distance between the corresponding embedded vectors. Therefore, the search for correlated time series can be carried out with an efficient nearest-neighbor index in the embedding space. Our theoretical analysis shows that the accuracy of our method can be guaranteed under certain regularity conditions. We further conduct experiments on real-world datasets, and the results show that our method indeed outperforms the baseline solution. In particular, for correlation approximation, our method reduces the approximation loss by half in most test cases compared to the baseline solution. For top-k highest correlation search, our method improves the precision from 5% to 20% while the query time is similar to that of the baseline approach.
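
The geometric fact that makes distance a proxy for correlation is that, for z-normalised series, Pearson correlation is an exact function of Euclidean distance: corr(x, y) = 1 - ||x_hat - y_hat||^2 / (2n). The snippet below verifies this identity numerically; the paper learns a low-dimensional embedding that approximately preserves this geometry.

import numpy as np

rng = np.random.default_rng(0)
n = 256
x, y = rng.standard_normal(n), rng.standard_normal(n)

def znorm(s):
    return (s - s.mean()) / s.std()

xh, yh = znorm(x), znorm(y)
corr_from_dist = 1.0 - np.sum((xh - yh) ** 2) / (2 * n)
print(np.corrcoef(x, y)[0, 1], corr_from_dist)   # the two values match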


One button machine for automating feature engineering in relational databases

Jun 01, 2017
Hoang Thanh Lam, Johann-Michael Thiebaut, Mathieu Sinn, Bei Chen, Tiep Mai, Oznur Alkan

Feature engineering is one of the most important and time-consuming tasks in predictive analytics projects. It requires domain knowledge and data exploration to discover relevant hand-crafted features from raw data. In this paper, we introduce a system called One Button Machine, or OneBM for short, which automates feature discovery in relational databases. OneBM automatically performs a key activity of data scientists, namely joining database tables and applying advanced data transformations to extract useful features from data. We validated OneBM in three Kaggle competitions, in which it achieved performance as good as the top 16% to 24% of data scientists. More importantly, OneBM outperformed the state-of-the-art system in a Kaggle competition in terms of prediction accuracy and ranking on the Kaggle leaderboard. The results show that OneBM can be useful for both data scientists and non-experts. It helps data scientists reduce data exploration time, allowing them to test many ideas by trial and error in a short time. On the other hand, it enables non-experts, who are not familiar with data science, to quickly extract value from their data with little effort, time and cost.
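
A hedged illustration of the kind of work OneBM automates, with made-up table and column names: follow foreign keys from the main table through related tables and turn the joined rows into per-entity aggregate features.

import pandas as pd

customers = pd.DataFrame({"customer_id": [1, 2]})
orders = pd.DataFrame({"order_id": [10, 11, 12], "customer_id": [1, 1, 2]})
items = pd.DataFrame({"order_id": [10, 10, 11, 12], "price": [3.0, 4.0, 10.0, 2.5]})

# Depth-2 join path: customers -> orders -> items, then aggregate per customer.
path = customers.merge(orders, on="customer_id").merge(items, on="order_id")
features = (path.groupby("customer_id")["price"]
            .agg(total_spend="sum", n_items="count", max_price="max")
            .reset_index())
print(features)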


(Blue) Taxi Destination and Trip Time Prediction from Partial Trajectories

Sep 17, 2015
Hoang Thanh Lam, Ernesto Diaz-Aviles, Alessandra Pascale, Yiannis Gkoufas, Bei Chen

Real-time estimation of destination and travel time for taxis is of great importance for existing electronic dispatch systems. We present an approach based on trip matching and ensemble learning, in which we leverage the patterns observed in a dataset of roughly 1.7 million taxi journeys to predict the corresponding final destination and travel time for ongoing taxi trips, as a solution for the ECML/PKDD Discovery Challenge 2015 competition. The results of our empirical evaluation show that our approach is effective and very robust, leading our team -- BlueTaxi -- to the 3rd and 7th positions in the final rankings for the trip time and destination prediction tasks, respectively. Given that the final rankings were computed using a very small test set (with only 320 trips), we believe that our approach is one of the most robust solutions for the challenge, based on the consistency of our good results across the test sets.

* ECML/PKDD Discovery Challenge 2015 
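
A toy sketch of the trip-matching component, using synthetic trajectories: find the historical trips whose prefixes best match the ongoing trip and average their destinations. The actual competition solution combined many such predictors in an ensemble.

import numpy as np

rng = np.random.default_rng(1)
history = [rng.standard_normal((20, 2)).cumsum(axis=0) for _ in range(200)]  # past trips (lat/lon)
partial = history[0][:8] + 0.01 * rng.standard_normal((8, 2))                # ongoing trip prefix

def prefix_distance(trip, prefix):
    k = min(len(trip), len(prefix))
    return np.mean(np.linalg.norm(trip[:k] - prefix[:k], axis=1))

dists = np.array([prefix_distance(t, partial) for t in history])
nearest = np.argsort(dists)[:10]                      # 10 most similar past trips
predicted_destination = np.mean([history[i][-1] for i in nearest], axis=0)
print(predicted_destination)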