Ashish Sabharwal

Language Model Planners do not Scale, but do Formalizers?

Mar 25, 2026

Why Are Linear RNNs More Parallelizable?

Mar 05, 2026

Olmo 3

Dec 15, 2025

AstaBench: Rigorous Benchmarking of AI Agents with a Scientific Research Suite

Oct 24, 2025

Leveraging In-Context Learning for Language Model Agents

Jun 16, 2025

Exact Expressive Power of Transformers with Padding

May 25, 2025

A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers

Mar 05, 2025

ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning

Feb 03, 2025

Understanding the Logic of Direct Preference Alignment through Logic

Dec 23, 2024

SUPER: Evaluating Agents on Setting Up and Executing Tasks from Research Repositories

Sep 11, 2024