Shuohuan Wang

Upcycling Instruction Tuning from Dense to Mixture-of-Experts via Parameter Merging

Oct 02, 2024

NACL: A General and Effective KV Cache Eviction Framework for LLMs at Inference Time

Aug 07, 2024

DHA: Learning Decoupled-Head Attention from Transformer Checkpoints via Adaptive Heads Fusion

Jun 03, 2024

HFT: Half Fine-Tuning for Large Language Models

Apr 29, 2024

Dual Modalities of Text: Visual and Textual Generative Pre-training

Apr 17, 2024

On Training Data Influence of GPT Models

Apr 11, 2024

Tool-Augmented Reward Modeling

Oct 02, 2023

ERNIE-Music: Text-to-Waveform Music Generation with Diffusion Models

Feb 09, 2023

ERNIE-Code: Beyond English-Centric Cross-lingual Pretraining for Programming Languages

Dec 13, 2022

X-PuDu at SemEval-2022 Task 6: Multilingual Learning for English and Arabic Sarcasm Detection

Nov 30, 2022