Ruofei Zhang

BANG: Bridging Autoregressive and Non-autoregressive Generation with Large Scale Pretraining

Dec 31, 2020
Weizhen Qi, Yeyun Gong, Jian Jiao, Yu Yan, Dayiheng Liu, Weizhu Chen, Kewen Tang, Houqiang Li, Jiusheng Chen, Ruofei Zhang, Ming Zhou, Nan Duan

An Enhanced Knowledge Injection Model for Commonsense Generation

Dec 01, 2020
Zhihao Fan, Yeyun Gong, Zhongyu Wei, Siyuan Wang, Yameng Huang, Jian Jiao, Xuanjing Huang, Nan Duan, Ruofei Zhang

GLGE: A New General Language Generation Evaluation Benchmark

Nov 24, 2020
Dayiheng Liu, Yu Yan, Yeyun Gong, Weizhen Qi, Hang Zhang, Jian Jiao, Weizhu Chen, Jie Fu, Linjun Shou, Ming Gong, Pengcheng Wang, Jiusheng Chen, Daxin Jiang, Jiancheng Lv, Ruofei Zhang, Winnie Wu, Ming Zhou, Nan Duan

ProphetNet-Ads: A Looking Ahead Strategy for Generative Retrieval Models in Sponsored Search Engine

Oct 21, 2020
Weizhen Qi, Yeyun Gong, Yu Yan, Jian Jiao, Bo Shao, Ruofei Zhang, Houqiang Li, Nan Duan, Ming Zhou

AutoADR: Automatic Model Design for Ad Relevance

Oct 14, 2020
Yiren Chen, Yaming Yang, Hong Sun, Yujing Wang, Yu Xu, Wei Shen, Rong Zhou, Yunhai Tong, Jing Bai, Ruofei Zhang

HittER: Hierarchical Transformers for Knowledge Graph Embeddings

Aug 28, 2020
Sanxing Chen, Xiaodong Liu, Jianfeng Gao, Jian Jiao, Ruofei Zhang, Yangfeng Ji

ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training

Feb 22, 2020
Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou

TwinBERT: Distilling Knowledge to Twin-Structured BERT Models for Efficient Retrieval

Feb 14, 2020
Wenhao Lu, Jian Jiao, Ruofei Zhang

DeepProbe: Information Directed Sequence Understanding and Chatbot Design via Recurrent Neural Networks

Mar 01, 2018
Zi Yin, Keng-hao Chang, Ruofei Zhang
