Sarath Chandar

LLMs Can't Play Hangman: On the Necessity of a Private Working Memory for Language Agents

Jan 11, 2026

Investigating the Multilingual Calibration Effects of Language Model Instruction-Tuning

Jan 04, 2026

Effect of Document Packing on the Latent Multi-Hop Reasoning Capabilities of Large Language Models

Dec 16, 2025

Just-in-time Episodic Feedback Hinter: Leveraging Offline Knowledge to Improve LLM Agents Adaptation

Oct 05, 2025

Parity Requires Unified Input Dependence and Negative Eigenvalues in SSMs

Aug 10, 2025

Optimizers Qualitatively Alter Solutions And We Should Leverage This

Jul 16, 2025

Did I Faithfully Say What I Thought? Bridging the Gap Between Neural Activity and Self-Explanations in Large Language Models

Jun 12, 2025

V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning

Jun 11, 2025

Boosting LLM Reasoning via Spontaneous Self-Correction

Jun 07, 2025

Monitoring morphometric drift in lifelong learning segmentation of the spinal cord

May 02, 2025