
Albert Ge

SlopCodeBench: Benchmarking How Coding Agents Degrade Over Long-Horizon Iterative Tasks

Mar 25, 2026

R&B: Domain Regrouping and Data Mixture Balancing for Efficient Foundation Model Training

May 01, 2025

Everything Everywhere All at Once: LLMs can In-Context Learn Multiple Tasks in Superposition

Oct 08, 2024