
Bin Bi


Grid-VLP: Revisiting Grid Features for Vision-Language Pre-training

Aug 21, 2021

E2E-VLP: End-to-End Vision-Language Pre-training Enhanced by Visual Learning

Jun 04, 2021

StructuralLM: Structural Pre-training for Form Understanding

May 24, 2021

SemVLP: Vision-Language Pre-training by Aligning Semantics at Multiple Levels

Mar 14, 2021

Latent Template Induction with Gumbel-CRFs

Nov 29, 2020

VECO: Variable Encoder-decoder Pre-training for Cross-lingual Understanding and Generation

Oct 30, 2020

PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation

Apr 14, 2020

Symmetric Regularization based BERT for Pair-wise Semantic Reasoning

Sep 08, 2019

Incorporating External Knowledge into Machine Reading for Generative Question Answering

Sep 06, 2019

StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding

Aug 16, 2019