Manas Gaur

University of Maryland, Baltimore County

Beyond Memorization: Testing LLM Reasoning on Unseen Theory of Computation Tasks

Jan 19, 2026

Neurosymbolic Retrievers for Retrieval-augmented Generation

Jan 08, 2026

SymLoc: Symbolic Localization of Hallucination across HaluEval and TruthfulQA

Nov 18, 2025

Side Effects of Erasing Concepts from Diffusion Models

Aug 20, 2025

If Pigs Could Fly... Can LLMs Logically Reason Through Counterfactuals?

May 28, 2025

Ranking Free RAG: Replacing Re-ranking with Selection in RAG for Sensitive Domains

May 21, 2025

From Guessing to Asking: An Approach to Resolving the Persona Knowledge Gap in LLMs during Multi-Turn Conversations

Mar 16, 2025

Can LLMs Obfuscate Code? A Systematic Analysis of Large Language Models into Assembly Code Obfuscation

Dec 24, 2024

Human-Readable Adversarial Prompts: An Investigation into LLM Vulnerabilities Using Situational Context

Dec 20, 2024

Towards Robust Evaluation of Unlearning in LLMs via Data Transformations

Nov 23, 2024