June Yang

NVIDIA

Scalable Training of Mixture-of-Experts Models with Megatron Core

Mar 10, 2026

MoE Parallel Folding: Heterogeneous Parallelism Mappings for Efficient Large-Scale MoE Model Training with Megatron Core

Apr 21, 2025

Llama 3 Meets MoE: Efficient Upcycling

Dec 13, 2024

Aligning Language Models with Offline Reinforcement Learning from Human Feedback

Aug 23, 2023