Weizhu Chen

GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation

Nov 18, 2022

Soft-Labeled Contrastive Pre-training for Function-level Code Representation

Oct 18, 2022

Less is More: Task-aware Layer-wise Distillation for Language Model Compression

Oct 05, 2022

CodeT: Code Generation with Generated Tests

Jul 21, 2022

OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering

Jul 08, 2022

Joint Generator-Ranker Learning for Natural Language Generation

Jun 28, 2022

PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance

Jun 25, 2022

CERT: Continual Pre-Training on Sketches for Library-Oriented Code Generation

Jun 14, 2022

On the Advance of Making Language Models Better Reasoners

Jun 07, 2022

Diffusion-GAN: Training GANs with Diffusion

Jun 05, 2022