Daniel Campos

Ragnarök: A Reusable RAG Framework and Baselines for TREC 2024 Retrieval-Augmented Generation Track

Jun 24, 2024

Synthetic Test Collections for Retrieval Evaluation

May 13, 2024

Arctic-Embed: Scalable, Efficient, and Accurate Text Embedding Models

May 08, 2024

Overview of the TREC 2023 Product Search Track

Nov 15, 2023

Quick Dense Retrievers Consume KALE: Post Training Kullback Leibler Alignment of Embeddings for Asymmetrical dual encoders

Apr 17, 2023

Noise-Robust Dense Retrieval via Contrastive Alignment Post Training

Apr 10, 2023

To Asymmetry and Beyond: Structured Pruning of Sequence to Sequence Models for Improved Inference Efficiency

Apr 05, 2023

oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes

Apr 04, 2023

Dense Sparse Retrieval: Using Sparse Language Models for Inference Efficient Dense Retrieval

Mar 31, 2023

Compressing Cross-Lingual Multi-Task Models at Qualtrics

Nov 29, 2022