Daxin Jiang

Inference with Reference: Lossless Acceleration of Large Language Models

Apr 10, 2023

Large Language Models are Diverse Role-Players for Summarization Evaluation

Mar 28, 2023

Lexicon-Enhanced Self-Supervised Training for Multilingual Dense Retrieval

Mar 27, 2023

Empowering Dual-Encoder with Query Generator for Cross-Lingual Dense Retrieval

Mar 27, 2023

Bridge the Gap between Language models and Tabular Understanding

Feb 16, 2023

LexLIP: Lexicon-Bottlenecked Language-Image Pre-Training for Large-Scale Image-Text Retrieval

Add code
Feb 06, 2023
Viaarxiv icon

Modeling Sequential Sentence Relation to Improve Cross-lingual Dense Retrieval

Feb 03, 2023

Fine-Grained Distillation for Long Document Retrieval

Dec 20, 2022

Adam: Dense Retrieval Distillation with Adaptive Dark Examples

Dec 20, 2022

MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders are Better Dense Retrievers

Dec 15, 2022