Zied Bouraoui

What do Deck Chairs and Sun Hats Have in Common? Uncovering Shared Properties in Large Concept Vocabularies

Oct 23, 2023
Amit Gajbhiye, Zied Bouraoui, Na Li, Usashi Chatterjee, Luis Espinosa Anke, Steven Schockaert

Concepts play a central role in many applications, including settings where concepts have to be modelled in the absence of sentence context. Previous work has therefore focused on distilling decontextualised concept embeddings from language models. However, concepts can be modelled from different perspectives, whereas concept embeddings mostly capture taxonomic structure. To address this issue, we propose a strategy for identifying what different concepts, from a potentially large concept vocabulary, have in common with others. We then represent concepts in terms of the properties they share with other concepts. To demonstrate the practical usefulness of this way of modelling concepts, we consider the task of ultra-fine entity typing, a challenging multi-label classification problem. We show that by augmenting the label set with shared properties, we can improve the performance of state-of-the-art models for this task.

* Accepted for EMNLP 2023 
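The core idea can be sketched in a few lines: represent each concept by the properties it shares with other concepts, and add those shared properties to the label set. The property inventory below is invented for illustration; it is not the paper's data or implementation.

```python
# Toy property inventory (hypothetical values for illustration only).
PROPERTIES = {
    "deck chair": {"used outdoors", "found at the beach", "foldable"},
    "sun hat":    {"used outdoors", "found at the beach", "protects from sun"},
    "umbrella":   {"protects from sun", "foldable"},
}

def shared_properties(concept, vocabulary):
    """Properties that `concept` has in common with at least one other concept."""
    shared = set()
    for other, props in vocabulary.items():
        if other != concept:
            shared |= vocabulary[concept] & props
    return shared

def augment_labels(labels, vocabulary):
    """Extend a label set with the properties shared by its concept labels."""
    augmented = set(labels)
    for label in labels:
        if label in vocabulary:
            augmented |= shared_properties(label, vocabulary)
    return augmented
```

For instance, `augment_labels({"deck chair"}, PROPERTIES)` would add "found at the beach", a property that deck chairs share with sun hats, as an extra label.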

Unified Model for Crystalline Material Generation

Jun 07, 2023
Astrid Klipfel, Yaël Frégier, Adlane Sayede, Zied Bouraoui

One of the greatest challenges facing our society is the discovery of new, innovative crystal materials with specific properties. Recently, the problem of generating crystal materials has received increasing attention; however, it remains unclear to what extent, or in what way, we can develop generative models that consider both the periodicity and geometric equivalence of crystal structures. To address this issue, we propose two unified models that act simultaneously on the crystal lattice and atomic positions using periodic equivariant architectures. Our models are capable of learning any arbitrary crystal lattice deformation by lowering the total energy to reach thermodynamic stability. Code and data are available at https://github.com/aklipf/GemsNet.


Ultra-Fine Entity Typing with Prior Knowledge about Labels: A Simple Clustering Based Strategy

May 22, 2023
Na Li, Zied Bouraoui, Steven Schockaert

Ultra-fine entity typing (UFET) is the task of inferring the semantic types, from a large set of fine-grained candidates, that apply to a given entity mention. This task is especially challenging because we only have a small number of training examples for many of the types, even with distant supervision strategies. State-of-the-art models, therefore, have to rely on prior knowledge about the type labels in some way. In this paper, we show that the performance of existing methods can be improved using a simple technique: we use pre-trained label embeddings to cluster the labels into semantic domains and then treat these domains as additional types. We show that this strategy consistently leads to improved results, as long as high-quality label embeddings are used. We furthermore use the label clusters as part of a simple post-processing technique, which results in further performance gains. Both strategies treat the UFET model as a black box and can thus straightforwardly be used to improve a wide range of existing models.
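The clustering step can be illustrated with a minimal sketch (not the paper's implementation): label embeddings are clustered into semantic domains, and each domain is treated as an additional type. The toy 2-d "label embeddings" below are invented; real label embeddings would be high-dimensional and pre-trained.

```python
def kmeans(points, k, iters=20):
    """Plain k-means with deterministic initialisation (first k points)."""
    centroids = [points[i] for i in range(k)]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Update step: each centroid becomes the mean of its assigned points.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return assign

# Toy label embeddings: two person-like types and two place-like types.
labels = ["actor", "politician", "city", "river"]
embeddings = [(1.0, 0.1), (0.9, 0.0), (0.0, 1.0), (0.1, 0.9)]

domains = kmeans(embeddings, k=2)
# Treat each cluster as an extra type: a mention labelled "actor" also
# receives the domain label of actor's cluster.
extra_type = {lab: f"domain-{d}" for lab, d in zip(labels, domains)}
```

With these toy embeddings, "actor" and "politician" end up in one domain and "city" and "river" in the other, so a mention typed as "actor" would also receive the person-like domain as an extra label.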


Distilling Semantic Concept Embeddings from Contrastively Fine-Tuned Language Models

May 16, 2023
Na Li, Hanane Kteich, Zied Bouraoui, Steven Schockaert

Learning vectors that capture the meaning of concepts remains a fundamental challenge. Somewhat surprisingly, perhaps, pre-trained language models have thus far only enabled modest improvements to the quality of such concept embeddings. Current strategies for using language models typically represent a concept by averaging the contextualised representations of its mentions in some corpus. This is potentially sub-optimal for at least two reasons. First, contextualised word vectors have an unusual geometry, which hampers downstream tasks. Second, concept embeddings should capture the semantic properties of concepts, whereas contextualised word vectors are also affected by other factors. To address these issues, we propose two contrastive learning strategies, based on the view that whenever two sentences reveal similar properties, the corresponding contextualised vectors should also be similar. One strategy is fully unsupervised, estimating the properties which are expressed in a sentence from the neighbourhood structure of the contextualised word embeddings. The second strategy instead relies on a distant supervision signal from ConceptNet. Our experimental results show that the resulting vectors substantially outperform existing concept embeddings in predicting the semantic properties of concepts, with the ConceptNet-based strategy achieving the best results. These findings are furthermore confirmed in a clustering task and in the downstream task of ontology completion.
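The contrastive objective can be sketched with a simplified InfoNCE-style loss (a generic formulation, not the paper's exact training code): the contextualised vector of an anchor sentence is pulled towards that of a sentence expressing similar properties, and pushed away from the rest.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def info_nce(anchor, positive, negatives, temperature=0.1):
    """-log( exp(sim(a,p)/t) / sum_j exp(sim(a,x_j)/t) ), x_j over positive + negatives."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # log-sum-exp trick for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)
```

The loss is small when the anchor is already close to its positive and far from the negatives, so minimising it over many sentence pairs pulls vectors of property-sharing sentences together.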


Equivariant Message Passing Neural Network for Crystal Material Discovery

Feb 01, 2023
Astrid Klipfel, Olivier Peltre, Najwa Harrati, Yaël Fregier, Adlane Sayede, Zied Bouraoui

Automatic material discovery with desired properties is a fundamental challenge for materials science. Considerable attention has recently been devoted to generating stable crystal structures. While existing work has shown impressive success on supervised tasks such as property prediction, progress on unsupervised tasks such as material generation is still hampered by the limited extent to which the equivalent geometric representations of the same crystal are considered. To address this challenge, we propose EMPNN, a periodic equivariant message-passing neural network that learns crystal lattice deformation in an unsupervised fashion. Our model acts equivariantly on the lattice according to the deformation action that must be performed, making it suitable for crystal generation, relaxation and optimisation. We present experimental evaluations that demonstrate the effectiveness of our approach.
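The equivariance property itself is easy to state in code. The sketch below is a hypothetical illustration of the property, not the EMPNN architecture: a model that outputs a deformation vector is equivariant if rotating its input rotates its output in the same way, i.e. f(R·x) = R·f(x).

```python
import math

def rotate(p, theta):
    """Rotate a 2-d point by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def toy_deformation(p):
    """A trivially equivariant 'model': pull every point toward the origin."""
    return (-0.5 * p[0], -0.5 * p[1])

def is_equivariant(f, p, theta, tol=1e-9):
    """Check f(R.x) == R.f(x) at a single point, up to tolerance."""
    lhs = f(rotate(p, theta))   # f(R.x)
    rhs = rotate(f(p), theta)   # R.f(x)
    return all(abs(a - b) < tol for a, b in zip(lhs, rhs))
```

A model with this property produces consistent deformations for all equivalent geometric representations of the same crystal, which is what makes it usable for generation and relaxation.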


Region-Based Merging of Open-Domain Terminological Knowledge

May 06, 2022
Zied Bouraoui, Sebastien Konieczny, Thanh Ma, Nicolas Schwind, Ivan Varzinczak

This paper introduces a novel method for merging open-domain terminological knowledge. It takes advantage of the Region Connection Calculus (RCC5), a formalism used to represent regions in a topological space and to reason about their set-theoretic relationships. To this end, we first propose a faithful translation of terminological knowledge, provided by several potentially conflicting sources, into region spaces. The merging is then performed on these spaces, and the result is translated back into the underlying language of the input sources. Our approach allows us to benefit from the expressivity and the flexibility of RCC5 while dealing with conflicting knowledge in a principled way.

* 13 pages, 19th International Conference on Principles of Knowledge Representation and Reasoning, KR'22 
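The five RCC5 base relations can be illustrated on toy regions modelled as finite sets (the paper works with region spaces, not finite sets; this sketch only shows the relation vocabulary that the merging operates over).

```python
def rcc5(a, b):
    """Classify the RCC5 relation between two regions modelled as sets."""
    if a == b:
        return "EQ"    # equal
    if not (a & b):
        return "DR"    # discrete (disjoint)
    if a < b:
        return "PP"    # a is a proper part of b
    if b < a:
        return "PPi"   # b is a proper part of a
    return "PO"        # partial overlap

# Toy terminological regions: the concept "dog" is a proper part of "mammal".
dog = {"beagle", "poodle"}
mammal = {"beagle", "poodle", "bat"}
```

A source asserting "dog is a mammal" thus corresponds to the PP relation between the two regions, and conflicting sources would induce incompatible RCC5 constraints that the merging has to resolve.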

Inferring Prototypes for Multi-Label Few-Shot Image Classification with Word Vector Guided Attention

Dec 07, 2021
Kun Yan, Chenbin Zhang, Jun Hou, Ping Wang, Zied Bouraoui, Shoaib Jameel, Steven Schockaert

Multi-label few-shot image classification (ML-FSIC) is the task of assigning descriptive labels to previously unseen images, based on a small number of training examples. A key feature of the multi-label setting is that images often have multiple labels, which typically refer to different regions of the image. When estimating prototypes, in a metric-based setting, it is thus important to determine which regions are relevant for which labels, but the limited amount of training data makes this highly challenging. As a solution, in this paper we propose to use word embeddings as a form of prior knowledge about the meaning of the labels. In particular, visual prototypes are obtained by aggregating the local feature maps of the support images, using an attention mechanism that relies on the label embeddings. As an important advantage, our model can infer prototypes for unseen labels without the need for fine-tuning any model parameters, which demonstrates its strong generalization abilities. Experiments on COCO and PASCAL VOC furthermore show that our model substantially improves the current state-of-the-art.

* Accepted by AAAI2022 
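The prototype construction can be sketched as label-guided attention (a hedged simplification with toy vectors, not the paper's model): local feature vectors from the support images are aggregated with attention weights derived from the label embedding.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def label_guided_prototype(local_features, label_embedding):
    """Weighted average of local features; weights = attention to the label."""
    scores = [sum(f * l for f, l in zip(feat, label_embedding))
              for feat in local_features]
    weights = softmax(scores)
    dim = len(local_features[0])
    return tuple(sum(w * feat[d] for w, feat in zip(weights, local_features))
                 for d in range(dim))
```

With toy inputs, a label embedding aligned with the first local feature makes the prototype dominated by that region, which is exactly the desired behaviour when only one region of a multi-label image is relevant to a given label.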

Deriving Word Vectors from Contextualized Language Models using Topic-Aware Mention Selection

Jun 15, 2021
Yixiao Wang, Zied Bouraoui, Luis Espinosa Anke, Steven Schockaert

One of the long-standing challenges in lexical semantics consists in learning representations of words which reflect their semantic properties. The remarkable success of word embeddings for this purpose suggests that high-quality representations can be obtained by summarizing the sentence contexts of word mentions. In this paper, we propose a method for learning word representations that follows this basic strategy, but differs from standard word embeddings in two important ways. First, we take advantage of contextualized language models (CLMs) rather than bags of word vectors to encode contexts. Second, rather than learning a word vector directly, we use a topic model to partition the contexts in which words appear, and then learn different topic-specific vectors for each word. Finally, we use a task-specific supervision signal to make a soft selection of the resulting vectors. We show that this simple strategy leads to high-quality word vectors, which are more predictive of semantic properties than word embeddings and existing CLM-based strategies.
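The topic-partitioning idea can be sketched with toy data (topic assignments and vectors below are invented for illustration; the paper uses a topic model and contextualised language model representations): the contexts of a word are grouped by topic, and a separate vector is derived per topic, here by simple averaging.

```python
def topic_vectors(contexts):
    """contexts: list of (topic_id, contextualised_vector) pairs for one word."""
    sums, counts = {}, {}
    for topic, vec in contexts:
        if topic not in sums:
            sums[topic] = [0.0] * len(vec)
            counts[topic] = 0
        sums[topic] = [s + v for s, v in zip(sums[topic], vec)]
        counts[topic] += 1
    # One vector per topic: the mean of that topic's contextualised vectors.
    return {t: tuple(s / counts[t] for s in sums[t]) for t in sums}

# Toy mentions of an ambiguous word: two contexts from topic 0, one from topic 1.
mentions = [(0, (1.0, 0.0)), (0, (0.8, 0.2)), (1, (0.0, 1.0))]
vecs = topic_vectors(mentions)
```

Each topic then yields its own vector, and a task-specific supervision signal can softly select among them, as described above.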


Aligning Visual Prototypes with BERT Embeddings for Few-Shot Learning

May 21, 2021
Kun Yan, Zied Bouraoui, Ping Wang, Shoaib Jameel, Steven Schockaert

Few-shot learning (FSL) is the task of learning to recognize previously unseen categories of images from a small number of training examples. This is a challenging task, as the available examples may not be enough to unambiguously determine which visual features are most characteristic of the considered categories. To alleviate this issue, we propose a method that additionally takes into account the names of the image classes. While the use of class names has already been explored in previous work, our approach differs in two key aspects. First, while previous work has aimed to directly predict visual prototypes from word embeddings, we found that better results can be obtained by treating visual and text-based prototypes separately. Second, we propose a simple strategy for learning class name embeddings using the BERT language model, which we found to substantially outperform the GloVe vectors that were used in previous work. We furthermore propose a strategy for dealing with the high dimensionality of these vectors, inspired by models for aligning cross-lingual word embeddings. We provide experiments on miniImageNet, CUB and tieredImageNet, showing that our approach consistently improves the state-of-the-art in metric-based FSL.

* Accepted by ICMR2021 
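The idea of keeping visual and text-based prototypes separate can be sketched as a combined similarity score (toy vectors; the BERT-derived class-name embeddings and the learned dimensionality-reducing alignment are omitted here): a query is scored against both prototypes of each class and the similarities are mixed.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify(query, classes, alpha=0.5):
    """classes: {name: (visual_prototype, text_prototype)}; returns the best class.

    alpha balances the visual and text-based similarity terms.
    """
    def score(name):
        vis, txt = classes[name]
        return alpha * cosine(query, vis) + (1 - alpha) * cosine(query, txt)
    return max(classes, key=score)

# Toy prototypes in an already-aligned space (invented values).
classes = {
    "cat": ((1.0, 0.0), (0.9, 0.1)),
    "dog": ((0.0, 1.0), (0.1, 0.9)),
}
```

Keeping the two prototype types separate means a noisy visual prototype, estimated from only a few support images, can be compensated by the class-name side of the score.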