
Junbing Yan

TRELM: Towards Robust and Efficient Pre-training for Knowledge-Enhanced Language Models

Mar 17, 2024
Junbing Yan, Chengyu Wang, Taolin Zhang, Xiaofeng He, Jun Huang, Longtao Huang, Hui Xue, Wei Zhang

Figures 1–4 for TRELM: Towards Robust and Efficient Pre-training for Knowledge-Enhanced Language Models

Do Large Language Models Understand Logic or Just Mimick Context?

Feb 19, 2024
Junbing Yan, Chengyu Wang, Jun Huang, Wei Zhang


Towards Better Parameter-Efficient Fine-Tuning for Large Language Models: A Position Paper

Nov 22, 2023
Chengyu Wang, Junbing Yan, Wei Zhang, Jun Huang


From Complex to Simple: Unraveling the Cognitive Tree for Reasoning with Small Language Models

Nov 12, 2023
Junbing Yan, Chengyu Wang, Taolin Zhang, Xiaofeng He, Jun Huang, Wei Zhang


Making Small Language Models Better Multi-task Learners with Mixture-of-Task-Adapters

Sep 20, 2023
Yukang Xie, Chengyu Wang, Junbing Yan, Jiyong Zhou, Feiqi Deng, Jun Huang

Figures 1–4 for Making Small Language Models Better Multi-task Learners with Mixture-of-Task-Adapters