Somshubra Majumdar

NVIDIA Nemotron 3: Efficient and Open Intelligence

Dec 24, 2025

Nemotron 3 Nano: Open, Efficient Mixture-of-Experts Hybrid Mamba-Transformer Model for Agentic Reasoning

Dec 23, 2025

Scaling Test-Time Compute to Achieve IOI Gold Medal with Open-Weight Models

Oct 16, 2025

Open ASR Leaderboard: Towards Reproducible and Transparent Multilingual and Long-Form Speech Recognition Evaluation

Oct 08, 2025

NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model

Aug 21, 2025

From Output to Evaluation: Does Raw Instruction-Tuned Code LLMs Output Suffice for Fill-in-the-Middle Code Generation?

May 24, 2025

Llama-Nemotron: Efficient Reasoning Models

May 02, 2025

SWAN-GPT: An Efficient and Scalable Approach for Long-Context Language Modeling

Apr 11, 2025

Nemotron-H: A Family of Accurate and Efficient Hybrid Mamba-Transformer Models

Apr 10, 2025

OpenCodeInstruct: A Large-scale Instruction Tuning Dataset for Code LLMs

Apr 05, 2025