Chengrun Yang

Long-form factuality in large language models
Apr 03, 2024

Large Language Models as Optimizers
Sep 07, 2023

Resource-Constrained Neural Architecture Search on Tabular Datasets
Apr 15, 2022

How Low Can We Go: Trading Memory for Error in Low-Precision Training
Jun 18, 2021

TenIPS: Inverse Propensity Sampling for Tensor Completion
Jan 01, 2021

Low-Rank Tensor Recovery with Euclidean-Norm-Induced Schatten-p Quasi-Norm Regularization
Dec 07, 2020

Efficient AutoML Pipeline Search with Matrix and Tensor Factorization
Jun 07, 2020

Robust Non-Linear Matrix Factorization for Dictionary Learning, Denoising, and Clustering
May 04, 2020

OBOE: Collaborative Filtering for AutoML Initialization
Aug 09, 2018