Tomer Asida

LatentMoE: Toward Optimal Accuracy per FLOP and Parameter in Mixture of Experts

Jan 26, 2026

NVIDIA Nemotron 3: Efficient and Open Intelligence

Dec 24, 2025

Nemotron 3 Nano: Open, Efficient Mixture-of-Experts Hybrid Mamba-Transformer Model for Agentic Reasoning

Dec 23, 2025

NVIDIA Nemotron Nano V2 VL

Nov 07, 2025

NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model

Aug 21, 2025

Jamba-1.5: Hybrid Transformer-Mamba Models at Scale

Aug 22, 2024

Jamba: A Hybrid Transformer-Mamba Language Model

Mar 28, 2024