Matan Eyal

Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?
May 09, 2024

Unpacking Tokenization: Evaluating Text Compression and its Correlation with Model Performance
Mar 10, 2024

Breaking the Language Barrier: Can Direct Inference Outperform Pre-Translation in Multilingual LLM Applications?
Mar 04, 2024

The Hidden Space of Transformer Language Adapters
Feb 20, 2024

Multilingual Instruction Tuning With Just a Pinch of Multilinguality
Jan 08, 2024

Gemini: A Family of Highly Capable Multimodal Models
Dec 19, 2023

Multilingual Sequence-to-Sequence Models for Hebrew NLP
Dec 19, 2022

Large Scale Substitution-based Word Sense Induction
Oct 14, 2021

Bootstrapping Relation Extractors using Syntactic Search by Examples
Feb 09, 2021

Interactive Extractive Search over Biomedical Corpora
Jun 07, 2020