Aref Jafari

HAGRID: A Human-LLM Collaborative Dataset for Generative Information-Seeking with Attribution

Jul 31, 2023

Improved knowledge distillation by utilizing backward pass knowledge in neural networks

Jan 27, 2023

Continuation KD: Improved Knowledge Distillation through the Lens of Continuation Optimization

Dec 12, 2022

Towards Understanding Label Regularization for Fine-tuning Pre-trained Language Models

May 25, 2022

Pro-KD: Progressive Distillation by Following the Footsteps of the Teacher

Oct 16, 2021

How to Select One Among All? An Extensive Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding

Sep 20, 2021

Annealing Knowledge Distillation

Apr 14, 2021

Segmentation Approach for Coreference Resolution Task

Jun 30, 2020