Daniel Campos

Overview of the TREC 2023 Product Search Track

Nov 15, 2023
Daniel Campos, Surya Kallumadi, Corby Rosset, ChengXiang Zhai, Alessandro Magnani

Quick Dense Retrievers Consume KALE: Post Training Kullback Leibler Alignment of Embeddings for Asymmetrical dual encoders

Apr 17, 2023
Daniel Campos, Alessandro Magnani, ChengXiang Zhai

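The title describes a post-training alignment step: shrink the query encoder of an asymmetric dual-encoder retriever, then align its output to the original encoder under a Kullback-Leibler objective so the frozen document index stays usable. As a rough illustration of that idea (not the paper's code; the tensor shapes and the use of in-batch candidate documents are assumptions), a KL alignment loss might look like:

```python
import torch
import torch.nn.functional as F

def kale_alignment_loss(student_q: torch.Tensor,
                        teacher_q: torch.Tensor,
                        doc_emb: torch.Tensor,
                        temperature: float = 1.0) -> torch.Tensor:
    """KL-align a compressed query encoder to the original one.

    student_q, teacher_q: (batch, dim) query embeddings from the small
    and original query encoders; doc_emb: (n_docs, dim) embeddings from
    the frozen document encoder, so the existing index is untouched.
    """
    # Per-query similarity distribution over the candidate documents.
    s_logits = student_q @ doc_emb.T / temperature
    t_logits = teacher_q @ doc_emb.T / temperature
    # KL(teacher || student): the student should rank documents the way
    # the original query encoder did.
    return F.kl_div(F.log_softmax(s_logits, dim=-1),
                    F.softmax(t_logits, dim=-1),
                    reduction="batchmean")
```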

Noise-Robust Dense Retrieval via Contrastive Alignment Post Training

Apr 10, 2023
Daniel Campos, ChengXiang Zhai, Alessandro Magnani

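Read from the title, the recipe here is contrastive alignment after training: freeze the retriever, then teach the query encoder to embed noisy queries (typos, misspellings) close to their clean counterparts. A minimal InfoNCE-style sketch of that alignment, with the row-wise pairing and the temperature as assumptions rather than the paper's settings:

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(noisy_q: torch.Tensor,
                               clean_q: torch.Tensor,
                               temperature: float = 0.05) -> torch.Tensor:
    """Pull each noisy-query embedding toward its clean twin and away
    from the other queries in the batch; row i of noisy_q corresponds
    to row i of clean_q."""
    noisy_q = F.normalize(noisy_q, dim=-1)
    clean_q = F.normalize(clean_q, dim=-1)
    logits = noisy_q @ clean_q.T / temperature         # (batch, batch)
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)             # positives on diagonal
```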

To Asymmetry and Beyond: Structured Pruning of Sequence to Sequence Models for Improved Inference Efficiency

Apr 05, 2023
Daniel Campos, ChengXiang Zhai

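The asymmetry in the title reflects where sequence-to-sequence inference cost lives: decoder layers run once per generated token, so pruning them buys far more latency than pruning the encoder, which runs once per input. As a toy illustration of structured (layer-level) pruning on a Hugging Face BART checkpoint (the model name and the keep-every-other-layer policy are placeholders, not the paper's setup):

```python
import torch.nn as nn
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# Drop every other decoder layer; leave the encoder intact, since its
# cost is paid once per input rather than once per generated token.
kept = [layer for i, layer in enumerate(model.model.decoder.layers)
        if i % 2 == 0]
model.model.decoder.layers = nn.ModuleList(kept)
model.config.decoder_layers = len(kept)
```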

oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes

Apr 04, 2023
Daniel Campos, Alexandre Marques, Mark Kurtz, ChengXiang Zhai

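The three ingredients named in the title compose a standard sparse-transfer pipeline: start from a well-chosen dense initialization, gradually zero out low-magnitude weights, and distill from a dense teacher to recover accuracy. A bare-bones magnitude-pruning step, one of the several regimes the title alludes to (the function and threshold policy here are illustrative, not the paper's exact schedule):

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero the smallest-magnitude entries of a weight matrix, keeping
    the largest (1 - sparsity) fraction. In gradual pruning schedules
    this is applied layer by layer while sparsity ramps up over training."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).to(weight.dtype)
    return weight * mask
```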

Dense Sparse Retrieval: Using Sparse Language Models for Inference Efficient Dense Retrieval

Mar 31, 2023
Daniel Campos, ChengXiang Zhai

Compressing Cross-Lingual Multi-Task Models at Qualtrics

Nov 29, 2022
Daniel Campos, Daniel Perry, Samir Joshi, Yashmeet Gambhir, Wei Du, Zhengzheng Xing, Aaron Colak

Sparse*BERT: Sparse Models are Robust

May 25, 2022
Daniel Campos, Alexandre Marques, Tuan Nguyen, Mark Kurtz, ChengXiang Zhai
