Mehdi Rezagholizadeh

Huawei Noah's Ark Lab

Making a MIRACL: Multilingual Information Retrieval Across a Continuum of Languages

Oct 18, 2022

DyLoRA: Parameter Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation

Oct 14, 2022

Integer Fine-tuning of Transformer-based Models

Sep 20, 2022

Learning Functions on Multiple Sets using Multi-Set Transformers

Jun 30, 2022

Towards Understanding Label Regularization for Fine-tuning Pre-trained Language Models

May 25, 2022

Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding

May 21, 2022

Dynamic Position Encoding for Transformers

Apr 18, 2022

CILDA: Contrastive Data Augmentation using Intermediate Layer Knowledge Distillation

Apr 15, 2022

When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation

Mar 17, 2022

JABER and SABER: Junior and Senior Arabic BERt

Jan 09, 2022