Mohammad Taher Pilehvar

Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations

Sep 15, 2021
Mohsen Fayyaz, Ehsan Aghazadeh, Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar

How Does Fine-tuning Affect the Geometry of Embedding Space: A Case Study on Isotropy

Sep 10, 2021
Sara Rajaee, Mohammad Taher Pilehvar

Don't Discard All the Biased Instances: Investigating a Core Assumption in Dataset Bias Mitigation Techniques

Sep 01, 2021
Hossein Amirkhani, Mohammad Taher Pilehvar

A Cluster-based Approach for Improving Isotropy in Contextual Embedding Space

Jun 02, 2021
Sara Rajaee, Mohammad Taher Pilehvar

Exploring the Role of BERT Token Representations to Explain Sentence Probing Results

Apr 03, 2021
Hosein Mohebbi, Ali Modarressi, Mohammad Taher Pilehvar

XL-WiC: A Multilingual Benchmark for Evaluating Semantic Contextualization

Oct 13, 2020
Alessandro Raganato, Tommaso Pasini, Jose Camacho-Collados, Mohammad Taher Pilehvar

Language Models and Word Sense Disambiguation: An Overview and Analysis

Aug 26, 2020
Daniel Loureiro, Kiamehr Rezaee, Mohammad Taher Pilehvar, Jose Camacho-Collados

Will-They-Won't-They: A Very Large Dataset for Stance Detection on Twitter

May 01, 2020
Costanza Conforti, Jakob Berndt, Mohammad Taher Pilehvar, Chryssi Giannitsarou, Flavio Toxvaerd, Nigel Collier

WiC-TSV: An Evaluation Benchmark for Target Sense Verification of Words in Context

Apr 30, 2020
Anna Breit, Artem Revenko, Kiamehr Rezaee, Mohammad Taher Pilehvar, Jose Camacho-Collados

On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation

Sep 30, 2019
Victor Prokhorov, Ehsan Shareghi, Yingzhen Li, Mohammad Taher Pilehvar, Nigel Collier
