Shaoliang Nie

Pisces: An Auto-regressive Foundation Model for Image Understanding and Generation

Jun 12, 2025

Diversity-driven Data Selection for Language Model Tuning through Sparse Autoencoder

Feb 19, 2025

Unveiling User Satisfaction and Creator Productivity Trade-Offs in Recommendation Platforms

Oct 31, 2024

The Perfect Blend: Redefining RLHF with Mixture of Judges

Sep 30, 2024

The Llama 3 Herd of Models

Jul 31, 2024

On the Equivalence of Graph Convolution and Mixup

Sep 29, 2023

Are Machine Rationales (Not) Useful to Humans? Measuring and Improving Human Utility of Free-Text Rationales

May 11, 2023

AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning

Oct 12, 2022

FRAME: Evaluating Simulatability Metrics for Free-Text Rationales

Jul 02, 2022

ER-TEST: Evaluating Explanation Regularization Methods for NLP Models

May 25, 2022