Dinesh Manocha

HALLUCINOGEN: A Benchmark for Evaluating Object Hallucination in Large Visual-Language Models

Dec 29, 2024

DAVE: Diverse Atomic Visual Elements Dataset with High Representation of Vulnerable Road Users in Complex and Unpredictable Environments

Dec 28, 2024

VisDoM: Multi-Document QA with Visually Rich Elements Using Multimodal Retrieval-Augmented Generation

Dec 14, 2024

SILA: Signal-to-Language Augmentation for Enhanced Control in Text-to-Audio Generation

Dec 13, 2024

PromptRefine: Enhancing Few-Shot Performance on Low-Resource Indic Languages with Example Selection from Related Example Banks

Dec 07, 2024

Immune: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment

Nov 27, 2024

MMAU: A Massive Multi-Task Audio Understanding and Reasoning Benchmark

Oct 24, 2024

Do Audio-Language Models Understand Linguistic Variations?

Oct 21, 2024

DocEdit-v2: Document Structure Editing Via Multimodal LLM Grounding

Oct 21, 2024

PAT: Parameter-Free Audio-Text Aligner to Boost Zero-Shot Audio Classification

Oct 19, 2024