Weng Lam Tam

ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools

Jun 18, 2024

OAG-Bench: A Human-Curated Benchmark for Academic Graph Mining

Feb 24, 2024

AlignBench: Benchmarking Chinese Alignment of Large Language Models

Dec 05, 2023

Are Intermediate Layers and Labels Really Necessary? A General Language Model Distillation Method

Jun 11, 2023

GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model

Jun 11, 2023

GLM-130B: An Open Bilingual Pre-trained Model

Oct 05, 2022

Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated Neural Text Retrievers

Jul 14, 2022