Ziyi Yang

Unifying Vision, Text, and Layout for Universal Document Processing

Dec 20, 2022
Zineng Tang, Ziyi Yang, Guoxin Wang, Yuwei Fang, Yang Liu, Chenguang Zhu, Michael Zeng, Cha Zhang, Mohit Bansal

APOLLO: A Simple Approach for Adaptive Pretraining of Language Models for Logical Reasoning

Dec 19, 2022
Soumya Sanyal, Yichong Xu, Shuohang Wang, Ziyi Yang, Reid Pryzant, Wenhao Yu, Chenguang Zhu, Xiang Ren

UniSumm: Unified Few-shot Summarization with Multi-Task Pre-Training and Prefix-Tuning

Dec 06, 2022
Yulong Chen, Yang Liu, Ruochen Xu, Ziyi Yang, Chenguang Zhu, Michael Zeng, Yue Zhang

Empowering Language Models with Knowledge Graph Reasoning for Question Answering

Nov 15, 2022
Ziniu Hu, Yichong Xu, Wenhao Yu, Shuohang Wang, Ziyi Yang, Chenguang Zhu, Kai-Wei Chang, Yizhou Sun

MACSum: Controllable Summarization with Mixed Attributes

Nov 09, 2022
Yusen Zhang, Yang Liu, Ziyi Yang, Yuwei Fang, Yulong Chen, Dragomir Radev, Chenguang Zhu, Michael Zeng, Rui Zhang

Tail Batch Sampling: Approximating Global Contrastive Losses as Optimization over Batch Assignments

Oct 23, 2022
Vin Sachidananda, Ziyi Yang, Chenguang Zhu

Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners

May 29, 2022
Zhenhailong Wang, Manling Li, Ruochen Xu, Luowei Zhou, Jie Lei, Xudong Lin, Shuohang Wang, Ziyi Yang, Chenguang Zhu, Derek Hoiem, Shih-Fu Chang, Mohit Bansal, Heng Ji
