Mostofa Patwary

NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model

Aug 21, 2025

Nemotron-CC-Math: A 133 Billion-Token-Scale High Quality Math Pretraining Dataset

Aug 20, 2025

Llama-Nemotron: Efficient Reasoning Models

May 02, 2025

CLIMB: CLustering-based Iterative Data Mixture Bootstrapping for Language Model Pre-training

Apr 17, 2025

Efficient Hybrid Language Model Compression through Group-Aware SSM Pruning

Apr 15, 2025

NEMOTRON-CROSSTHINK: Scaling Self-Learning beyond Math Reasoning

Apr 15, 2025

Nemotron-H: A Family of Accurate and Efficient Hybrid Mamba-Transformer Models

Apr 10, 2025

Retro-Search: Exploring Untaken Paths for Deeper and Efficient Reasoning

Apr 06, 2025

Maximize Your Data's Potential: Enhancing LLM Accuracy with Two-Phase Pretraining

Dec 18, 2024

Nemotron-CC: Transforming Common Crawl into a Refined Long-Horizon Pretraining Dataset

Dec 03, 2024