
Quoc V. Le

Large Language Monkeys: Scaling Inference Compute with Repeated Sampling

Jul 31, 2024

NATURAL PLAN: Benchmarking LLMs on Natural Language Planning

Jun 06, 2024

Long-form factuality in large language models

Apr 03, 2024

Self-Discover: Large Language Models Self-Compose Reasoning Structures

Feb 06, 2024

AutoNumerics-Zero: Automated Discovery of State-of-the-Art Mathematical Functions

Dec 13, 2023

Large Language Models as Optimizers

Sep 07, 2023

Simple synthetic data reduces sycophancy in large language models

Aug 07, 2023

FLIQS: One-Shot Mixed-Precision Floating-Point and Integer Quantization Search

Aug 07, 2023

DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining

May 24, 2023

Symbol tuning improves in-context learning in language models

May 15, 2023