Keerthiram Murugesan

LongDA: Benchmarking LLM Agents for Long-Document Data Analysis

Jan 05, 2026

Patching LLM Like Software: A Lightweight Method for Improving Safety Policy in Large Language Models

Nov 11, 2025

Language Models Coupled with Metacognition Can Outperform Reasoning Models

Aug 25, 2025

Highlight All the Phrases: Enhancing LLM Transparency through Visual Factuality Indicators

Aug 09, 2025

AutoData: A Multi-Agent System for Open Web Data Collection

May 21, 2025

EfficientLLM: Efficiency in Large Language Models

May 20, 2025

PEEL the Layers and Find Yourself: Revisiting Inference-time Data Leakage for Residual Neural Networks

Apr 08, 2025

Cross-Examiner: Evaluating Consistency of Large Language Model-Generated Explanations

Mar 11, 2025

Protecting Users From Themselves: Safeguarding Contextual Privacy in Interactions with Conversational Agents

Feb 22, 2025

NGQA: A Nutritional Graph Question Answering Benchmark for Personalized Health-aware Nutritional Reasoning

Dec 20, 2024