
Leon Derczynski

Llama-Nemotron: Efficient Reasoning Models
May 02, 2025

Nemotron-H: A Family of Accurate and Efficient Hybrid Mamba-Transformer Models
Apr 10, 2025

NLP Security and Ethics, in the Wild
Apr 09, 2025

Importing Phantoms: Measuring LLM Package Hallucination Vulnerabilities
Jan 31, 2025

Nemotron-4 340B Technical Report
Jun 17, 2024

garak: A Framework for Security Probing Large Language Models
Jun 16, 2024

Introducing v0.5 of the AI Safety Benchmark from MLCommons
Apr 18, 2024

Summon a Demon and Bind it: A Grounded Theory of LLM Red Teaming in the Wild
Nov 13, 2023

Surveying (Dis)Parities and Concerns of Compute Hungry NLP Research
Jun 29, 2023

Assessing Language Model Deployment with Risk Cards
Mar 31, 2023