Mohammad Taher Pilehvar

BLEnD: A Benchmark for LLMs on Everyday Knowledge in Diverse Cultures and Languages

Jun 14, 2024

Tell Me Why: Explainable Public Health Fact-Checking with Large Language Models

May 15, 2024

DiFair: A Benchmark for Disentangled Assessment of Gender Knowledge and Bias

Oct 22, 2023

DecompX: Explaining Transformers Decisions by Propagating Token Decomposition

Jun 05, 2023

Guide the Learner: Controlling Product of Experts Debiasing Method Based on Token Attribution Similarities

Feb 06, 2023

An Empirical Study on the Transferability of Transformer Modules in Parameter-Efficient Fine-Tuning

Feb 01, 2023

BERT on a Data Diet: Finding Important Examples by Gradient-Based Pruning

Nov 10, 2022

Looking at the Overlooked: An Analysis on the Word-Overlap Bias in Natural Language Inference

Nov 07, 2022

GlobEnc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers

May 06, 2022

On the Importance of Data Size in Probing Fine-tuned Models

Mar 17, 2022