Bhavik Chandna

Dissecting Bias in LLMs: A Mechanistic Interpretability Perspective

Jun 06, 2025

ExtremeAIGC: Benchmarking LMM Vulnerability to AI-Generated Extremist Content

Mar 13, 2025

A Counterfactual Explanation Framework for Retrieval Models

Sep 01, 2024