
Aditi Raghunathan

Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction

Apr 21, 2025

Weight Ensembling Improves Reasoning in Language Models

Apr 15, 2025

Exact Unlearning of Finetuning Data via Model Merging at Scale

Apr 06, 2025

Overtrained Language Models Are Harder to Fine-Tune

Mar 24, 2025

Not-Just-Scaling Laws: Towards a Better Understanding of the Downstream Impact of Language Model Design Decisions

Mar 05, 2025

Mitigating Bias in RAG: Controlling the Embedder

Feb 24, 2025

Vulnerability of Text-Matching in ML/AI Conference Reviewer Assignments to Collusions

Dec 09, 2024

Scaling Laws for Precision

Nov 07, 2024

Context-Parametric Inversion: Why Instruction Finetuning May Not Actually Improve Context Reliance

Oct 14, 2024

Adversarial Attacks on Multimodal Agents

Jun 18, 2024