
Sayan Layek

SafeInfer: Context Adaptive Decoding Time Safety Alignment for Large Language Models

Jun 18, 2024

Safety Arithmetic: A Framework for Test-time Safety Alignment of Language Models by Steering Parameters and Activations

Jun 17, 2024

Breaking Boundaries: Investigating the Effects of Model Editing on Cross-linguistic Performance

Jun 17, 2024

How (un)ethical are instruction-centric responses of LLMs? Unveiling the vulnerabilities of safety guardrails to harmful queries

Mar 04, 2024

Context Matters: Pushing the Boundaries of Open-Ended Answer Generation with Graph-Structured Knowledge Context

Jan 23, 2024

Sowing the Wind, Reaping the Whirlwind: The Impact of Editing Language Models

Jan 19, 2024

Redefining Developer Assistance: Through Large Language Models in Software Ecosystem

Dec 09, 2023