
Dacheng Li


Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference

Mar 07, 2024
Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Hao Zhang, Banghua Zhu, Michael Jordan, Joseph E. Gonzalez, Ion Stoica


Fairness in Serving Large Language Models

Dec 31, 2023
Ying Sheng, Shiyi Cao, Dacheng Li, Banghua Zhu, Zhuohan Li, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica


S-LoRA: Serving Thousands of Concurrent LoRA Adapters

Nov 07, 2023
Ying Sheng, Shiyi Cao, Dacheng Li, Coleman Hooper, Nicholas Lee, Shuo Yang, Christopher Chou, Banghua Zhu, Lianmin Zheng, Kurt Keutzer, Joseph E. Gonzalez, Ion Stoica


LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers

Oct 05, 2023
Dacheng Li, Rulin Shao, Anze Xie, Eric P. Xing, Joseph E. Gonzalez, Ion Stoica, Xuezhe Ma, Hao Zhang


Judging LLM-as-a-judge with MT-Bench and Chatbot Arena

Jun 09, 2023
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, Ion Stoica


Does compressing activations help model parallel training?

Jan 06, 2023
Song Bian, Dacheng Li, Hongyi Wang, Eric P. Xing, Shivaram Venkataraman


MPCFormer: fast, performant and private Transformer inference with MPC

Nov 02, 2022
Dacheng Li, Rulin Shao, Hongyi Wang, Han Guo, Eric P. Xing, Hao Zhang


AMP: Automatically Finding Model Parallel Strategies with Heterogeneity Awareness

Oct 13, 2022
Dacheng Li, Hongyi Wang, Eric Xing, Hao Zhang


Dual Contradistinctive Generative Autoencoder

Nov 19, 2020
Gaurav Parmar, Dacheng Li, Kwonjoon Lee, Zhuowen Tu
