
Neeraj Varshney

The Art of Defending: A Systematic Evaluation and Analysis of LLM Defense Strategies on Safety and Over-Defensiveness

Dec 30, 2023
Neeraj Varshney, Pavel Dolin, Agastya Seth, Chitta Baral

Accelerating LLaMA Inference by Enabling Intermediate Layer Decoding via Instruction Tuning with LITE

Nov 07, 2023
Neeraj Varshney, Agneet Chatterjee, Mihir Parmar, Chitta Baral

Accelerating LLM Inference by Enabling Intermediate Layer Decoding

Oct 28, 2023
Neeraj Varshney, Agneet Chatterjee, Mihir Parmar, Chitta Baral

Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models

Oct 02, 2023
Man Luo, Shrinidhi Kumbhar, Ming Shen, Mihir Parmar, Neeraj Varshney, Pratyay Banerjee, Somak Aditya, Chitta Baral

Can NLP Models 'Identify', 'Distinguish', and 'Justify' Questions that Don't have a Definitive Answer?

Sep 08, 2023
Ayushi Agarwal, Nisarg Patel, Neeraj Varshney, Mihir Parmar, Pavan Mallina, Aryan Bhavin Shah, Srihari Raju Sangaraju, Tirth Patel, Nihar Thakkar, Chitta Baral

A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation

Jul 08, 2023
Neeraj Varshney, Wenlin Yao, Hongming Zhang, Jianshu Chen, Dong Yu

Can NLP Models Correctly Reason Over Contexts that Break the Common Assumptions?

May 20, 2023
Neeraj Varshney, Mihir Parmar, Nisarg Patel, Divij Handa, Sayantan Sarkar, Man Luo, Chitta Baral

A Unified Evaluation Framework for Novelty Detection and Accommodation in NLP with an Instantiation in Authorship Attribution

May 08, 2023
Neeraj Varshney, Himanshu Gupta, Eric Robertson, Bing Liu, Chitta Baral

Post-Abstention: Towards Reliably Re-Attempting the Abstained Instances in QA

May 02, 2023
Neeraj Varshney, Chitta Baral

Methods and Mechanisms for Interactive Novelty Handling in Adversarial Environments

Mar 06, 2023
Tung Thai, Ming Shen, Mayank Garg, Ayush Kalani, Nakul Vaidya, Utkarsh Soni, Mudit Verma, Sriram Gopalakrishnan, Neeraj Varshney, Chitta Baral, Subbarao Kambhampati, Jivko Sinapov, Matthias Scheutz
