
Liang Xiang

Seedance 1.5 pro: A Native Audio-Visual Joint Generation Foundation Model

Dec 23, 2025

Virtual Width Networks

Nov 17, 2025

Model Merging in Pre-training of Large Language Models

May 17, 2025

Seed1.5-VL Technical Report

May 11, 2025

Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving

Apr 03, 2025

FullStack Bench: Evaluating LLMs as Full Stack Coders

Dec 03, 2024

Unlock the Correlation between Supervised Fine-Tuning and Reinforcement Learning in Training Code Large Language Models

Jun 14, 2024

BAMBOO: a predictive and transferable machine learning force field framework for liquid electrolyte development

Apr 12, 2024

MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs

Feb 23, 2024

Learning Regularized Positional Encoding for Molecular Prediction

Nov 23, 2022