Hailin Zhang

Retrieval-Augmented Generation for AI-Generated Content: A Survey

Feb 29, 2024
Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, Bin Cui

CAFE: Towards Compact, Adaptive, and Fast Embedding for Large-scale Recommendation Models

Dec 06, 2023
Hailin Zhang, Zirui Liu, Boxuan Chen, Yikai Zhao, Tong Zhao, Tong Yang, Bin Cui

Experimental Analysis of Large-scale Learnable Vector Storage Compression

Nov 27, 2023
Hailin Zhang, Penghao Zhao, Xupeng Miao, Yingxia Shao, Zirui Liu, Tong Yang, Bin Cui

Model-enhanced Vector Index

Sep 23, 2023
Hailin Zhang, Yujing Wang, Qi Chen, Ruiheng Chang, Ting Zhang, Ziming Miao, Yingyan Hou, Yang Ding, Xupeng Miao, Haonan Wang, Bochen Pang, Yuefeng Zhan, Hao Sun, Weiwei Deng, Qi Zhang, Fan Yang, Xing Xie, Mao Yang, Bin Cui

Adaptive Multi-Teacher Knowledge Distillation with Meta-Learning

Jun 11, 2023
Hailin Zhang, Defang Chen, Can Wang

Galvatron: Efficient Transformer Training over Multiple GPUs Using Automatic Parallelism

Nov 25, 2022
Xupeng Miao, Yujie Wang, Youhe Jiang, Chunan Shi, Xiaonan Nie, Hailin Zhang, Bin Cui

Knowledge Distillation with the Reused Teacher Classifier

Mar 26, 2022
Defang Chen, Jian-Ping Mei, Hailin Zhang, Can Wang, Yan Feng, Chun Chen

Confidence-Aware Multi-Teacher Knowledge Distillation

Dec 30, 2021
Hailin Zhang, Defang Chen, Can Wang

HET: Scaling out Huge Embedding Model Training via Cache-enabled Distributed Framework

Dec 14, 2021
Xupeng Miao, Hailin Zhang, Yining Shi, Xiaonan Nie, Zhi Yang, Yangyu Tao, Bin Cui
