
Trishna Chakraborty

Toward Autonomous Laboratory Safety Monitoring with Vision Language Models: Learning to See Hazards Through Scene Structure

Jan 31, 2026

HEAL: An Empirical Study on Hallucinations in Embodied Agents Driven by Large Language Models

Jun 18, 2025

Unfair Alignment: Examining Safety Alignment Across Vision Encoder Layers in Vision-Language Models

Nov 06, 2024

Cross-Modal Safety Alignment: Is textual unlearning all you need?

May 27, 2024