ChengXiang Zhai

Noise-Robust Dense Retrieval via Contrastive Alignment Post Training

Apr 10, 2023
Daniel Campos, ChengXiang Zhai, Alessandro Magnani


To Asymmetry and Beyond: Structured Pruning of Sequence to Sequence Models for Improved Inference Efficiency

Apr 05, 2023
Daniel Campos, ChengXiang Zhai


oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes

Apr 04, 2023
Daniel Campos, Alexandre Marques, Mark Kurtz, ChengXiang Zhai


Dense Sparse Retrieval: Using Sparse Language Models for Inference Efficient Dense Retrieval

Mar 31, 2023
Daniel Campos, ChengXiang Zhai


Quick Dense Retrievers Consume KALE: Post Training Kullback Leibler Alignment of Embeddings for Asymmetrical dual encoders

Mar 31, 2023
Daniel Campos, Alessandro Magnani, ChengXiang Zhai


Competence-Based Analysis of Language Models

Mar 01, 2023
Adam Davies, Jize Jiang, ChengXiang Zhai


Entity Set Co-Expansion in StackOverflow

Dec 05, 2022
Yu Zhang, Yunyi Zhang, Yucheng Jiang, Martin Michalski, Yu Deng, Lucian Popa, ChengXiang Zhai, Jiawei Han


CONCRETE: Improving Cross-lingual Fact-checking with Cross-lingual Retrieval

Sep 05, 2022
Kung-Hsiang Huang, ChengXiang Zhai, Heng Ji


Sparse*BERT: Sparse Models are Robust

May 25, 2022
Daniel Campos, Alexandre Marques, Tuan Nguyen, Mark Kurtz, ChengXiang Zhai
