Bin Bi
SemVLP: Vision-Language Pre-training by Aligning Semantics at Multiple Levels

Mar 14, 2021
Chenliang Li, Ming Yan, Haiyang Xu, Fuli Luo, Wei Wang, Bin Bi, Songfang Huang


Latent Template Induction with Gumbel-CRFs

Nov 29, 2020
Yao Fu, Chuanqi Tan, Bin Bi, Mosha Chen, Yansong Feng, Alexander M. Rush


VECO: Variable Encoder-decoder Pre-training for Cross-lingual Understanding and Generation

Oct 30, 2020
Fuli Luo, Wei Wang, Jiahao Liu, Yijia Liu, Bin Bi, Songfang Huang, Fei Huang, Luo Si


PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation

Apr 14, 2020
Bin Bi, Chenliang Li, Chen Wu, Ming Yan, Wei Wang


Symmetric Regularization based BERT for Pair-wise Semantic Reasoning

Sep 08, 2019
Xingyi Cheng, Weidi Xu, Kunlong Chen, Wei Wang, Bin Bi, Ming Yan, Chen Wu, Luo Si, Wei Chu, Taifeng Wang


Incorporating External Knowledge into Machine Reading for Generative Question Answering

Sep 06, 2019
Bin Bi, Chen Wu, Ming Yan, Wei Wang, Jiangnan Xia, Chenliang Li


StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding

Aug 16, 2019
Wei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi Bao, Liwei Peng, Luo Si


A Deep Cascade Model for Multi-Document Reading Comprehension

Nov 28, 2018
Ming Yan, Jiangnan Xia, Chen Wu, Bin Bi, Zhongzhou Zhao, Ji Zhang, Luo Si, Rui Wang, Wei Wang, Haiqing Chen


A Neural Comprehensive Ranker (NCR) for Open-Domain Question Answering

Oct 05, 2017
Bin Bi, Hao Ma


KeyVec: Key-semantics Preserving Document Representations

Sep 27, 2017
Bin Bi, Hao Ma
