Michael Zeng

ReCo: Region-Controlled Text-to-Image Generation

Nov 23, 2022
Zhengyuan Yang, Jianfeng Wang, Zhe Gan, Linjie Li, Kevin Lin, Chenfei Wu, Nan Duan, Zicheng Liu, Ce Liu, Michael Zeng, Lijuan Wang

Figures 1–4 for ReCo: Region-Controlled Text-to-Image Generation

UniSumm: Unified Few-shot Summarization with Multi-Task Pre-Training and Prefix-Tuning

Nov 21, 2022
Yulong Chen, Yang Liu, Ruochen Xu, Ziyi Yang, Chenguang Zhu, Michael Zeng, Yue Zhang

Figures 1–4 for UniSumm: Unified Few-shot Summarization with Multi-Task Pre-Training and Prefix-Tuning

MACSum: Controllable Summarization with Mixed Attributes

Nov 09, 2022
Yusen Zhang, Yang Liu, Ziyi Yang, Yuwei Fang, Yulong Chen, Dragomir Radev, Chenguang Zhu, Michael Zeng, Rui Zhang

Figures 1–4 for MACSum: Controllable Summarization with Mixed Attributes

A comprehensive study on self-supervised distillation for speaker representation learning

Oct 28, 2022
Zhengyang Chen, Yao Qian, Bing Han, Yanmin Qian, Michael Zeng

Figures 1–4 for A comprehensive study on self-supervised distillation for speaker representation learning

Task Compass: Scaling Multi-task Pre-training with Task Prefix

Oct 12, 2022
Zhuosheng Zhang, Shuohang Wang, Yichong Xu, Yuwei Fang, Wenhao Yu, Yang Liu, Hai Zhao, Chenguang Zhu, Michael Zeng

Figures 1–4 for Task Compass: Scaling Multi-task Pre-training with Task Prefix

Generate rather than Retrieve: Large Language Models are Strong Context Generators

Sep 29, 2022
Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, Meng Jiang

Figures 1–4 for Generate rather than Retrieve: Large Language Models are Strong Context Generators

Z-Code++: A Pre-trained Language Model Optimized for Abstractive Summarization

Aug 21, 2022
Pengcheng He, Baolin Peng, Liyang Lu, Song Wang, Jie Mei, Yang Liu, Ruochen Xu, Hany Hassan Awadalla, Yu Shi, Chenguang Zhu, Wayne Xiong, Michael Zeng, Jianfeng Gao, Xuedong Huang

Figures 1–4 for Z-Code++: A Pre-trained Language Model Optimized for Abstractive Summarization

Visual Clues: Bridging Vision and Language Foundations for Image Paragraph Captioning

Jun 03, 2022
Yujia Xie, Luowei Zhou, Xiyang Dai, Lu Yuan, Nguyen Bach, Ce Liu, Michael Zeng

Figures 1–4 for Visual Clues: Bridging Vision and Language Foundations for Image Paragraph Captioning

Automatic Rule Induction for Efficient Semi-Supervised Learning

May 20, 2022
Reid Pryzant, Ziyi Yang, Yichong Xu, Chenguang Zhu, Michael Zeng

Figures 1–4 for Automatic Rule Induction for Efficient Semi-Supervised Learning