Marianna Apidianaki

Representation of Lexical Stylistic Features in Language Models' Embedding Space

May 31, 2023
Qing Lyu, Marianna Apidianaki, Chris Callison-Burch


The representation space of pretrained Language Models (LMs) encodes rich information about words and their relationships (e.g., similarity, hypernymy, polysemy) as well as abstract semantic notions (e.g., intensity). In this paper, we demonstrate that lexical stylistic notions such as complexity, formality, and figurativeness can also be identified in this space. We show that it is possible to derive a vector representation for each of these stylistic notions from only a small number of seed pairs. Using these vectors, we can characterize new texts in terms of these dimensions by performing simple calculations in the corresponding embedding space. We conduct experiments on five datasets and find that static embeddings encode these features more accurately at the level of words and phrases, whereas contextualized LMs perform better on sentences. The lower performance of contextualized representations at the word level is partially attributable to the anisotropy of their vector space, which can be corrected to some extent using techniques like standardization.

* Accepted at *SEM 2023 
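The "simple calculations" mentioned in the abstract can be pictured with a short sketch: average the embedding differences over a few seed pairs to obtain a style direction, then score new inputs by projecting onto it. This is a minimal illustration rather than the authors' code; the sentence-transformers encoder and the seed pairs are assumptions, and the paper itself compares static and contextualized embeddings rather than this particular model.

```python
# Minimal sketch: derive a "formality" direction from seed pairs and score new
# inputs by projecting onto it. Encoder choice and seed pairs are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the embeddings studied in the paper

# Seed pairs ordered as (informal, formal); illustrative examples only.
seed_pairs = [("kid", "child"), ("buy", "purchase"), ("ask", "inquire")]
informal = model.encode([a for a, _ in seed_pairs])
formal = model.encode([b for _, b in seed_pairs])

# The style direction is the mean difference vector over the seed pairs.
direction = (formal - informal).mean(axis=0)
direction /= np.linalg.norm(direction)

def formality_score(text: str) -> float:
    """Project the normalized embedding of `text` onto the formality direction."""
    v = model.encode([text])[0]
    return float(np.dot(v / np.linalg.norm(v), direction))

print(formality_score("commence"), formality_score("start"))
```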

I Spy a Metaphor: Large Language Models and Diffusion Models Co-Create Visual Metaphors

May 24, 2023
Tuhin Chakrabarty, Arkadiy Saakyan, Olivia Winn, Artemis Panagopoulou, Yue Yang, Marianna Apidianaki, Smaranda Muresan


Visual metaphors are powerful rhetorical devices used to persuade or communicate creative ideas through images. Similar to linguistic metaphors, they convey meaning implicitly through symbolism and juxtaposition of the symbols. We propose a new task of generating visual metaphors from linguistic metaphors. This is a challenging task for diffusion-based text-to-image models, such as DALL$\cdot$E 2, since it requires the ability to model implicit meaning and compositionality. We propose to solve the task through the collaboration between Large Language Models (LLMs) and Diffusion Models: Instruct GPT-3 (davinci-002) with Chain-of-Thought prompting generates text that represents a visual elaboration of the linguistic metaphor containing the implicit meaning and relevant objects, which is then used as input to the diffusion-based text-to-image models. Using a human-AI collaboration framework, where humans interact both with the LLM and the top-performing diffusion model, we create a high-quality dataset containing 6,476 visual metaphors for 1,540 linguistic metaphors and their associated visual elaborations. Evaluation by professional illustrators shows the promise of LLM-Diffusion Model collaboration for this task. To evaluate the utility of our Human-AI collaboration framework and the quality of our dataset, we perform both an intrinsic human-based evaluation and an extrinsic evaluation using visual entailment as a downstream task.

* ACL 2023 (Findings) 
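The LLM-to-diffusion chain can be sketched roughly as below. This is not the released pipeline: the elaboration step is stubbed out in place of the Chain-of-Thought GPT-3 call, and Stable Diffusion via the diffusers library stands in for DALL·E 2.

```python
# Rough sketch of the LLM -> text-to-image chain described above (not the released pipeline).
# `visual_elaboration` stands in for the Chain-of-Thought prompted GPT-3 step;
# Stable Diffusion (diffusers) stands in for DALL-E 2.
from diffusers import StableDiffusionPipeline

def visual_elaboration(metaphor: str) -> str:
    """Placeholder for the LLM step that expands a linguistic metaphor into
    explicit objects and a scene description (the implicit meaning made visual)."""
    # Hand-written elaboration, purely for illustration.
    return ("An office worker submerged up to his neck in a vast ocean made of "
            "white documents and folders, reaching upward for air.")

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")  # assumes a GPU is available

prompt = visual_elaboration("He is drowning in paperwork.")
image = pipe(prompt).images[0]
image.save("visual_metaphor.png")
```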

Explanation-based Finetuning Makes Models More Robust to Spurious Cues

May 08, 2023
Josh Magnus Ludan, Yixuan Meng, Tai Nguyen, Saurabh Shah, Qing Lyu, Marianna Apidianaki, Chris Callison-Burch


Large Language Models (LLMs) are so powerful that they sometimes learn correlations between labels and features that are irrelevant to the task, leading to poor generalization on out-of-distribution data. We propose explanation-based finetuning as a novel and general approach to mitigate LLMs' reliance on spurious correlations. Unlike standard finetuning where the model only predicts the answer given the input, we finetune the model to additionally generate a free-text explanation supporting its answer. To evaluate our method, we finetune the model on artificially constructed training sets containing different types of spurious cues, and test it on a test set without these cues. Compared to standard finetuning, our method makes models remarkably more robust against spurious cues in terms of accuracy drop across four classification tasks: ComVE (+1.2), CREAK (+9.1), e-SNLI (+15.4), and SBIC (+6.5). Moreover, our method works equally well with explanations generated by the model, implying its applicability to more datasets without human-written explanations.
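The core change over standard finetuning is in how the target sequence is constructed, which a small sketch makes concrete. The field names and the "answer because explanation" template below are illustrative assumptions, not the paper's exact format.

```python
# Minimal sketch of the data format behind explanation-based finetuning:
# compared to standard finetuning, the target sequence carries a free-text
# explanation in addition to the answer.
def standard_target(item: dict) -> str:
    return item["label"]

def explanation_target(item: dict) -> str:
    # Training the model to justify its answer is what the paper finds
    # makes it less reliant on spurious surface cues.
    return f'{item["label"]} because {item["explanation"]}'

item = {
    "input": "Premise: A dog runs in the park. Hypothesis: An animal is outside.",
    "label": "entailment",
    "explanation": "a dog is an animal and a park is outdoors",
}
print(standard_target(item))     # standard finetuning target
print(explanation_target(item))  # explanation-based finetuning target
```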


Faithful Chain-of-Thought Reasoning

Feb 01, 2023
Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, Chris Callison-Burch


While Chain-of-Thought (CoT) prompting boosts the performance of Language Models (LMs) on a gamut of complex reasoning tasks, the generated reasoning chain does not necessarily reflect how the model arrives at the answer (a.k.a. faithfulness). We propose Faithful CoT, a faithful-by-construction framework that decomposes a reasoning task into two stages: Translation (Natural Language query $\rightarrow$ symbolic reasoning chain) and Problem Solving (reasoning chain $\rightarrow$ answer), using an LM and a deterministic solver, respectively. We demonstrate the efficacy of our approach on 10 reasoning datasets from 4 diverse domains. It outperforms traditional CoT prompting on 9 out of the 10 datasets, with an average accuracy gain of 4.4 on Math Word Problems, 1.9 on Planning, 4.0 on Multi-hop Question Answering (QA), and 18.1 on Logical Inference, under greedy decoding. Together with self-consistency decoding, we achieve new state-of-the-art few-shot performance on 7 out of the 10 datasets, showing a strong synergy between faithfulness and accuracy.
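The two-stage decomposition can be pictured with a toy example. In the sketch below, the translation step is hand-written rather than LM-generated, and Python serves as one possible symbolic language, with the Python interpreter playing the role of the deterministic solver.

```python
# Toy sketch of the two-stage Faithful CoT setup: a (stubbed) translation step
# maps the natural-language query to an executable reasoning chain, and the
# deterministic solver, here simply the Python interpreter, computes the answer
# from that chain alone. In the paper, the translation step is done by an LM.
def translate(question: str) -> str:
    """Stand-in for the LM translation stage (NL query -> symbolic chain)."""
    # Hand-written chain for the example question below, purely for illustration.
    return "apples_start = 3\napples_bought = 4\nanswer = apples_start + apples_bought"

def solve(chain: str) -> int:
    """Deterministic problem-solving stage: execute the chain and read `answer`."""
    scope: dict = {}
    exec(chain, {}, scope)  # the chain, not free-form prose, determines the answer
    return scope["answer"]

chain = translate("Anna has 3 apples and buys 4 more. How many apples does she have now?")
print(solve(chain))  # 7
```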


Visualizing the Obvious: A Concreteness-based Ensemble Model for Noun Property Prediction

Oct 24, 2022
Yue Yang, Artemis Panagopoulou, Marianna Apidianaki, Mark Yatskar, Chris Callison-Burch


Neural language models encode rich knowledge about entities and their relationships, which can be extracted from their representations using probing. Common properties of nouns (e.g., red strawberries, small ant) are, however, more challenging to extract compared to other types of knowledge because they are rarely explicitly stated in texts. We hypothesize that this is mainly the case for perceptual properties, which are obvious to the participants in the communication. We propose to extract these properties from images and use them in an ensemble model, in order to complement the information that is extracted from language models. We consider perceptual properties to be more concrete than abstract properties (e.g., interesting, flawless). We propose to use the adjectives' concreteness score as a lever to calibrate the contribution of each source (text vs. images). We evaluate our ensemble model in a ranking task where the actual properties of a noun need to be ranked higher than other non-relevant properties. Our results show that the proposed combination of text and images greatly improves noun property prediction compared to powerful text-based language models.

* Findings of EMNLP 2022; The first two authors contributed equally 
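The concreteness-based calibration can be read as a simple interpolation between the two sources. The sketch below illustrates the idea with made-up scores and an assumed [0, 1] concreteness scale; it is not the paper's actual scoring function.

```python
# Sketch of a concreteness-weighted ensemble: the adjective's concreteness
# interpolates between a text-derived and an image-derived property score.
# Scores, scale, and weighting scheme here are illustrative assumptions.
def ensemble_score(text_score: float, image_score: float, concreteness: float) -> float:
    """`concreteness` in [0, 1]; higher values trust the image signal more."""
    return concreteness * image_score + (1.0 - concreteness) * text_score

# "red strawberry": a concrete, perceptual property -> lean on images.
print(ensemble_score(text_score=0.2, image_score=0.9, concreteness=0.95))
# "interesting book": an abstract property -> lean on the language model.
print(ensemble_score(text_score=0.8, image_score=0.1, concreteness=0.15))
```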

Towards Faithful Model Explanation in NLP: A Survey

Sep 22, 2022
Qing Lyu, Marianna Apidianaki, Chris Callison-Burch


End-to-end neural NLP architectures are notoriously difficult to understand, which has given rise to numerous efforts towards model explainability in recent years. An essential principle of model explanation is Faithfulness, i.e., an explanation should accurately represent the reasoning process behind the model's prediction. This survey first discusses the definition and evaluation of Faithfulness, as well as its significance for explainability. We then introduce recent advances in faithful explanation by grouping approaches into five categories: similarity methods, analysis of model-internal structures, backpropagation-based methods, counterfactual intervention, and self-explanatory models. Each category is illustrated with its representative studies, advantages, and shortcomings. Finally, we discuss all of these methods in terms of their common virtues and limitations, and reflect on future directions towards faithful explainability. For researchers interested in studying interpretability, this survey offers an accessible and comprehensive overview of the area, laying the basis for further exploration. For users hoping to better understand their own models, it serves as an introductory manual, helping them choose the most suitable explanation method(s).

* 62 pages 

How Does Data Corruption Affect Natural Language Understanding Models? A Study on GLUE datasets

Jan 12, 2022
Aarne Talman, Marianna Apidianaki, Stergios Chatzikyriakidis, Jörg Tiedemann


A central question in natural language understanding (NLU) research is whether high performance demonstrates the models' strong reasoning capabilities. We present an extensive series of controlled experiments where pre-trained language models are exposed to data that have undergone specific corruption transformations. The transformations involve removing instances of specific word classes and often lead to nonsensical sentences. Our results show that performance remains high for most GLUE tasks when the models are fine-tuned or tested on corrupted data, suggesting that the models leverage other cues for prediction even in nonsensical contexts. Our proposed data transformations can be used as a diagnostic tool for assessing the extent to which a specific dataset constitutes a proper testbed for evaluating models' language understanding capabilities.
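One corruption of the kind described, removing all tokens of a chosen word class, can be sketched with an off-the-shelf POS tagger. spaCy is used below as an assumed stand-in, and the paper's exact transformations may differ.

```python
# Illustrative corruption transformation: removing all tokens of a chosen word
# class (here, verbs) before fine-tuning or testing a GLUE-style classifier.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def remove_pos(sentence: str, pos: str = "VERB") -> str:
    """Drop every token tagged with the given part of speech."""
    doc = nlp(sentence)
    return " ".join(tok.text for tok in doc if tok.pos_ != pos)

# The result is often nonsensical, yet classifiers may still label it correctly.
print(remove_pos("The movie was praised by critics who watched the premiere."))
```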


Is "my favorite new movie" my favorite movie? Probing the Understanding of Recursive Noun Phrases

Dec 15, 2021
Qing Lyu, Hua Zheng, Daoxin Li, Li Zhang, Marianna Apidianaki, Chris Callison-Burch

Figure 1 for Is "my favorite new movie" my favorite movie? Probing the Understanding of Recursive Noun Phrases
Figure 2 for Is "my favorite new movie" my favorite movie? Probing the Understanding of Recursive Noun Phrases
Figure 3 for Is "my favorite new movie" my favorite movie? Probing the Understanding of Recursive Noun Phrases
Figure 4 for Is "my favorite new movie" my favorite movie? Probing the Understanding of Recursive Noun Phrases

Recursive noun phrases (NPs) have interesting semantic properties. For example, "my favorite new movie" is not necessarily "my favorite movie", whereas "my new favorite movie" is. This is common sense to humans, yet it is unknown whether pre-trained language models have such knowledge. We introduce the Recursive Noun Phrase Challenge (RNPC), a challenge set targeting the understanding of recursive NPs. When evaluated on our dataset, state-of-the-art Transformer models only achieve around chance performance. Still, we show that such knowledge is learnable with appropriate data. We further probe the models for relevant linguistic features that can be learned from our tasks, including modifier semantic category and modifier scope. Finally, models trained on RNPC achieve strong zero-shot performance on an extrinsic Harm Detection task, showing the usefulness of the understanding of recursive NPs in downstream applications. All code and data will be released at https://github.com/veronica320/Recursive-NPs.
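In the same spirit as the RNPC probes, one can ask an off-the-shelf NLI model whether "my favorite new movie" entails "my favorite movie". The model choice and premise/hypothesis wording below are illustrative assumptions, not the paper's evaluation setup.

```python
# Quick entailment probe of a recursive NP with a pretrained NLI model.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

result = nli({"text": "This is my favorite new movie.",
              "text_pair": "This is my favorite movie."})
# Prints the predicted relation; per the paper, entailment would be the wrong call here.
print(result)
```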


ALL Dolphins Are Intelligent and SOME Are Friendly: Probing BERT for Nouns' Semantic Properties and their Prototypicality

Oct 12, 2021
Marianna Apidianaki, Aina Garí Soler


Large-scale language models encode rich commonsense knowledge acquired through exposure to massive data during pre-training, but their understanding of entities and their semantic properties is unclear. We probe BERT (Devlin et al., 2019) for the properties of English nouns as expressed by adjectives that do not restrict the reference scope of the noun they modify (as in "red car"), but instead emphasise some inherent aspect ("red strawberry"). We base our study on psycholinguistics datasets that capture the association strength between nouns and their semantic features. We probe BERT using cloze tasks and in a classification setting, and show that the model has marginal knowledge of these features and their prevalence as expressed in these datasets. We discuss factors that make this evaluation challenging and impede drawing general conclusions about the models' knowledge of noun properties. Finally, we show that when tested in a fine-tuning setting addressing entailment, BERT successfully leverages the information needed for reasoning about the meaning of adjective-noun constructions, outperforming previous methods.

* Accepted to BlackboxNLP 2021 
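A cloze-style probe of the kind described can be run with the Hugging Face fill-mask pipeline. The prompt templates below are illustrative assumptions, not the ones used in the paper.

```python
# Minimal cloze-style probe of noun properties with BERT via fill-mask.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for noun in ["strawberries", "dolphins"]:
    predictions = fill(f"Most {noun} are [MASK].", top_k=5)
    print(noun, [p["token_str"] for p in predictions])
```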