Yuning Mao

Llama 2: Open Foundation and Fine-Tuned Chat Models

Jul 19, 2023

LIMA: Less Is More for Alignment

May 18, 2023

Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization

May 06, 2023

Representation Deficiency in Masked Language Modeling

Feb 04, 2023

Progressive Prompts: Continual Learning for Language Models

Jan 29, 2023

XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models

Jan 25, 2023

Towards a Unified Multi-Dimensional Evaluator for Text Generation

Oct 13, 2022

CiteSum: Citation Text-guided Scientific Extreme Summarization and Low-resource Domain Adaptation

May 12, 2022

Unsupervised Summarization with Customized Granularities

Jan 29, 2022

UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning

Oct 14, 2021