Weng Lam Tam

OAG-Bench: A Human-Curated Benchmark for Academic Graph Mining

Feb 24, 2024
Fanjin Zhang, Shijie Shi, Yifan Zhu, Bo Chen, Yukuo Cen, Jifan Yu, Yelin Chen, Lulu Wang, Qingfei Zhao, Yuqing Cheng, Tianyi Han, Yuwei An, Dan Zhang, Weng Lam Tam, Kun Cao, Yunhe Pang, Xinyu Guan, Huihui Yuan, Jian Song, Xiaoyan Li, Yuxiao Dong, Jie Tang

AlignBench: Benchmarking Chinese Alignment of Large Language Models

Dec 05, 2023
Xiao Liu, Xuanyu Lei, Shengyuan Wang, Yue Huang, Zhuoer Feng, Bosi Wen, Jiale Cheng, Pei Ke, Yifan Xu, Weng Lam Tam, Xiaohan Zhang, Lichao Sun, Hongning Wang, Jing Zhang, Minlie Huang, Yuxiao Dong, Jie Tang

GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model

Jun 11, 2023
Shicheng Tan, Weng Lam Tam, Yuanchun Wang, Wenwen Gong, Yang Yang, Hongyin Tang, Keqing He, Jiahao Liu, Jingang Wang, Shu Zhao, Peng Zhang, Jie Tang

Are Intermediate Layers and Labels Really Necessary? A General Language Model Distillation Method

Jun 11, 2023
Shicheng Tan, Weng Lam Tam, Yuanchun Wang, Wenwen Gong, Shu Zhao, Peng Zhang, Jie Tang

GLM-130B: An Open Bilingual Pre-trained Model

Oct 05, 2022
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma, Yufei Xue, Jidong Zhai, Wenguang Chen, Peng Zhang, Yuxiao Dong, Jie Tang

Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated Neural Text Retrievers

Jul 14, 2022
Weng Lam Tam, Xiao Liu, Kaixuan Ji, Lilong Xue, Xingjian Zhang, Yuxiao Dong, Jiahua Liu, Maodi Hu, Jie Tang
