Daniel Cer

Transforming LLMs into Cross-modal and Cross-lingual Retrieval Systems

Apr 04, 2024

Gecko: Versatile Text Embeddings Distilled from Large Language Models

Mar 29, 2024

Gemma: Open Models Based on Gemini Research and Technology

Mar 13, 2024

Leveraging LLMs for Synthesizing Training Data Across Many Languages in Multilingual Dense Retrieval

Nov 10, 2023

Knowledge Prompts: Injecting World Knowledge into Language Models through Soft Prompts

Oct 10, 2022

Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation

May 25, 2022

SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer

Oct 15, 2021

A Simple and Effective Method To Eliminate the Self Language Bias in Multilingual Representations

Sep 10, 2021

Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models

Aug 26, 2021

NT5?! Training T5 to Perform Numerical Reasoning

Apr 15, 2021