Junzhuo Li

The Fine Line: Navigating Large Language Model Pretraining with Down-streaming Capability Analysis

Apr 01, 2024
Chen Yang, Junzhuo Li, Xinyao Niu, Xinrun Du, Songyang Gao, Haoran Zhang, Zhaoliang Chen, Xingwei Qu, Ruibin Yuan, Yizhi Li, Jiaheng Liu, Stephen W. Huang, Shawn Yue, Wenhu Chen, Jie Fu, Ge Zhang


Language Representation Projection: Can We Transfer Factual Knowledge across Languages in Multilingual Language Models?

Nov 07, 2023
Shaoyang Xu, Junzhuo Li, Deyi Xiong


DEPN: Detecting and Editing Privacy Neurons in Pretrained Language Models

Oct 31, 2023
Xinwei Wu, Junzhuo Li, Minghui Xu, Weilong Dong, Shuangzhi Wu, Chao Bian, Deyi Xiong


FewFedWeight: Few-shot Federated Learning Framework across Multiple NLP Tasks

Dec 16, 2022
Weilong Dong, Xinwei Wu, Junzhuo Li, Shuangzhi Wu, Chao Bian, Deyi Xiong


Swing Distillation: A Privacy-Preserving Knowledge Distillation Framework

Dec 16, 2022
Junzhuo Li, Xinwei Wu, Weilong Dong, Shuangzhi Wu, Chao Bian, Deyi Xiong
