
Michael Zeng

ReCo: Region-Controlled Text-to-Image Generation

Nov 23, 2022

MACSum: Controllable Summarization with Mixed Attributes

Nov 09, 2022

A comprehensive study on self-supervised distillation for speaker representation learning

Oct 28, 2022

Task Compass: Scaling Multi-task Pre-training with Task Prefix

Oct 12, 2022

Generate rather than Retrieve: Large Language Models are Strong Context Generators

Sep 29, 2022

Z-Code++: A Pre-trained Language Model Optimized for Abstractive Summarization

Aug 21, 2022

Visual Clues: Bridging Vision and Language Foundations for Image Paragraph Captioning

Jun 03, 2022

Automatic Rule Induction for Efficient Semi-Supervised Learning

May 20, 2022

i-Code: An Integrative and Composable Multimodal Learning Framework

May 05, 2022

Impossible Triangle: What's Next for Pre-trained Language Models?

Apr 20, 2022