Abhilasha Ravichander

Fractional Rotation, Full Potential? Investigating Performance and Convergence of Partial RoPE

Mar 12, 2026

In Agents We Trust, but Who Do Agents Trust? Latent Source Preferences Steer LLM Generations

Feb 17, 2026

Model State Arithmetic for Machine Unlearning

Jun 26, 2025

What Has Been Lost with Synthetic Evaluation?

May 28, 2025

Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations

Apr 17, 2025

Information-Guided Identification of Training Data Imprint in (Proprietary) Large Language Models

Mar 15, 2025

HALoGEN: Fantastic LLM Hallucinations and Where to Find Them

Jan 14, 2025

RESTOR: Knowledge Recovery through Machine Unlearning

Oct 31, 2024

Reverse Question Answering: Can an LLM Write a Question so Hard (or Bad) that it Can't Answer?

Oct 20, 2024

WildHallucinations: Evaluating Long-form Factuality in LLMs with Real-World Entity Queries

Jul 24, 2024