
Subhabrata Mukherjee

LiteTransformerSearch: Training-free On-device Search for Efficient Autoregressive Language Models

Mar 04, 2022

AutoDistil: Few-shot Task-agnostic Neural Architecture Search for Distilling Large Language Models

Jan 29, 2022

CLUES: Few-Shot Learning Evaluation in Natural Language Understanding

Nov 04, 2021

What do Compressed Large Language Models Forget? Robustness Challenges in Model Compression

Oct 16, 2021

LiST: Lite Self-training Makes Efficient Few-shot Learners

Oct 12, 2021

Self-training with Few-shot Rationalization: Teacher Explanations Aid Student in Few-shot NLU

Sep 17, 2021

Fairness via Representation Neutralization

Jun 23, 2021

XtremeDistilTransformers: Task Transfer for Task-agnostic Distillation

Jun 12, 2021

MetaXL: Meta Representation Transformation for Low-resource Cross-lingual Learning

Apr 16, 2021

Self-Training with Weak Supervision

Apr 12, 2021