Amitava Das

Visual Hallucination: Definition, Quantification, and Prescriptive Remediations
Mar 31, 2024

FACTOID: FACtual enTailment fOr hallucInation Detection
Mar 28, 2024

"Sorry, Come Again?" Prompting -- Enhancing Comprehension and Diminishing Hallucination with [PAUSE]-injected Optimal Paraphrasing

Add code
Mar 27, 2024
Figure 1 for "Sorry, Come Again?" Prompting -- Enhancing Comprehension and Diminishing Hallucination with [PAUSE]-injected Optimal Paraphrasing
Figure 2 for "Sorry, Come Again?" Prompting -- Enhancing Comprehension and Diminishing Hallucination with [PAUSE]-injected Optimal Paraphrasing
Figure 3 for "Sorry, Come Again?" Prompting -- Enhancing Comprehension and Diminishing Hallucination with [PAUSE]-injected Optimal Paraphrasing
Figure 4 for "Sorry, Come Again?" Prompting -- Enhancing Comprehension and Diminishing Hallucination with [PAUSE]-injected Optimal Paraphrasing
Viaarxiv icon

The What, Why, and How of Context Length Extension Techniques in Large Language Models -- A Detailed Survey
Jan 15, 2024

A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models
Jan 08, 2024

SEPSIS: I Can Catch Your Lies -- A New Paradigm for Deception Detection
Dec 01, 2023

Counter Turing Test CT^2: AI-Generated Text Detection is Not as Easy as You May Think -- Introducing AI Detectability Index
Oct 24, 2023

Exploring the Relationship between Analogy Identification and Sentence Structure Encoding in Large Language Models
Oct 13, 2023

The Troubling Emergence of Hallucination in Large Language Models -- An Extensive Definition, Quantification, and Prescriptive Remediations
Oct 08, 2023

Exploring the Relationship between LLM Hallucinations and Prompt Linguistic Nuances: Readability, Formality, and Concreteness
Sep 20, 2023