1.4 Million Open-Source Distilled Reasoning Dataset to Empower Large Language Model Training

Mar 25, 2025
Figures 1–4 accompany the paper.

View paper on arXiv