Joseph E. Gonzalez
Self-correcting LLM-controlled Diffusion Models

Nov 27, 2023
Tsung-Han Wu, Long Lian, Joseph E. Gonzalez, Boyi Li, Trevor Darrell

LLM-Assisted Code Cleaning For Training Accurate Code Generators

Nov 25, 2023
Naman Jain, Tianjun Zhang, Wei-Lin Chiang, Joseph E. Gonzalez, Koushik Sen, Ion Stoica

Rethinking Benchmark and Contamination for Language Models with Rephrased Samples

Nov 11, 2023
Shuo Yang, Wei-Lin Chiang, Lianmin Zheng, Joseph E. Gonzalez, Ion Stoica

S-LoRA: Serving Thousands of Concurrent LoRA Adapters

Nov 07, 2023
Ying Sheng, Shiyi Cao, Dacheng Li, Coleman Hooper, Nicholas Lee, Shuo Yang, Christopher Chou, Banghua Zhu, Lianmin Zheng, Kurt Keutzer, Joseph E. Gonzalez, Ion Stoica

Investigating the Behavior of Diffusion Models for Accelerating Electronic Structure Calculations

Nov 02, 2023
Daniel Rothchild, Andrew S. Rosen, Eric Taw, Connie Robinson, Joseph E. Gonzalez, Aditi S. Krishnapriyan

CLAIR: Evaluating Image Captions with Large Language Models

Oct 19, 2023
David Chan, Suzanne Petryk, Joseph E. Gonzalez, Trevor Darrell, John Canny

MemGPT: Towards LLMs as Operating Systems

Oct 12, 2023
Charles Packer, Vivian Fang, Shishir G. Patil, Kevin Lin, Sarah Wooders, Joseph E. Gonzalez

LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers

Oct 05, 2023
Dacheng Li, Rulin Shao, Anze Xie, Eric P. Xing, Joseph E. Gonzalez, Ion Stoica, Xuezhe Ma, Hao Zhang

LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset

Sep 30, 2023
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Tianle Li, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zhuohan Li, Zi Lin, Eric P. Xing, Joseph E. Gonzalez, Ion Stoica, Hao Zhang
