
Chenyan Xiong

Microsoft Research

Augmenting Zero-Shot Dense Retrievers with Plug-in Mixture-of-Memories

Feb 07, 2023

ClueWeb22: 10 Billion Web Documents with Visual and Semantic Information

Dec 02, 2022

Reduce Catastrophic Forgetting of Dense Retrieval Training with Teleportation Negatives

Oct 31, 2022

COCO-DR: Combating Distribution Shifts in Zero-Shot Dense Retrieval with Contrastive and Distributionally Robust Learning

Oct 27, 2022

Dimension Reduction for Efficient Dense Retrieval via Conditional Autoencoder

May 06, 2022

P^3 Ranker: Mitigating the Gaps between Pre-training and Ranking Fine-tuning with Prompt-based Learning and Pre-finetuning

May 05, 2022

METRO: Efficient Denoising Pretraining of Large Scale Autoencoding Language Models with Model Generated Signals

Apr 16, 2022

Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators

Apr 07, 2022

Neural Approaches to Conversational Information Retrieval

Jan 13, 2022

Zero-Shot Dense Retrieval with Momentum Adversarial Domain Invariant Representations

Oct 14, 2021