Nayeon Lee

Nemotron-Cascade: Scaling Cascaded Reinforcement Learning for General-Purpose Reasoning Models

Dec 15, 2025

NVIDIA Nemotron Nano V2 VL

Nov 07, 2025

Nemotron-H: A Family of Accurate and Efficient Hybrid Mamba-Transformer Models

Apr 10, 2025

Cosmos-Reason1: From Physical Common Sense To Embodied Reasoning

Mar 18, 2025

NVLM: Open Frontier-Class Multimodal LLMs

Sep 17, 2024

BLEnD: A Benchmark for LLMs on Everyday Knowledge in Diverse Cultures and Languages

Jun 14, 2024

HyperCLOVA X Technical Report

Apr 13, 2024

Measuring Political Bias in Large Language Models: What Is Said and How It Is Said

Mar 27, 2024

Mitigating Framing Bias with Polarity Minimization Loss

Nov 03, 2023

Towards Mitigating Hallucination in Large Language Models via Self-Reflection

Oct 10, 2023