Ori Ram

DRAGged into Conflicts: Detecting and Addressing Conflicting Sources in Search-Augmented LLMs
Jun 10, 2025

Making Retrieval-Augmented Language Models Robust to Irrelevant Context
Oct 02, 2023

Generating Benchmarks for Factuality Evaluation of Language Models
Jul 13, 2023

In-Context Retrieval-Augmented Language Models
Jan 31, 2023

Parallel Context Windows Improve In-Context Learning of Large Language Models
Dec 21, 2022

What Are You Token About? Dense Retrieval as Distributions Over the Vocabulary
Dec 20, 2022

Standing on the Shoulders of Giant Frozen Language Models
Apr 21, 2022

Transformer Language Models without Positional Encodings Still Learn Positional Information
Mar 30, 2022

Learning to Retrieve Passages without Supervision
Dec 14, 2021

How Optimal is Greedy Decoding for Extractive Question Answering?
Aug 12, 2021