Zhanghao Wu

LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation Dataset

Sep 30, 2023

Judging LLM-as-a-judge with MT-Bench and Chatbot Arena

Jun 09, 2023

Representing Long-Range Context for Graph Neural Networks with Global Attention

Jan 21, 2022

Distributed Reinforcement Learning is a Dataflow Problem

Dec 03, 2020

HAT: Hardware-Aware Transformers for Efficient Natural Language Processing

May 28, 2020

Lite Transformer with Long-Short Range Attention

Apr 24, 2020