
Xiaowei Ren

NVIDIA Nemotron 3: Efficient and Open Intelligence

Dec 24, 2025

Nemotron 3 Nano: Open, Efficient Mixture-of-Experts Hybrid Mamba-Transformer Model for Agentic Reasoning

Dec 23, 2025

Nemotron-H: A Family of Accurate and Efficient Hybrid Mamba-Transformer Models

Apr 10, 2025

Training Video Foundation Models with NVIDIA NeMo

Mar 17, 2025

Cosmos World Foundation Model Platform for Physical AI

Jan 07, 2025

Procrustes: a Dataflow and Accelerator for Sparse Deep Neural Network Training

Sep 23, 2020