
Yuzhen Huang

Compression Represents Intelligence Linearly

Apr 15, 2024

C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models

May 17, 2023

Pre-train and Search: Efficient Embedding Table Sharding with Pre-trained Neural Cost Models

May 03, 2023

Fast Distributed Training of Deep Neural Networks: Dynamic Communication Thresholding for Model and Data Parallelism

Oct 18, 2020

TensorOpt: Exploring the Tradeoffs in Distributed DNN Training with Auto-Parallelism

Apr 16, 2020