Divij Handa

ThinkTuning: Instilling Cognitive Reflections without Distillation

Aug 11, 2025

UnSeenTimeQA: Time-Sensitive Question-Answering Beyond LLMs' Memorization

Jul 03, 2024

ActionReasoningBench: Reasoning about Actions with and without Ramification Constraints

Jun 06, 2024

Jailbreaking Proprietary Large Language Models using Word Substitution Cipher

Feb 16, 2024

Can NLP Models Correctly Reason Over Contexts that Break the Common Assumptions?

May 20, 2023