Xinxi Lyu

Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research

Jan 31, 2024

FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation

May 23, 2023

Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations

Dec 19, 2022

Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?

Feb 25, 2022