Xinpeng Wang

"Seeing the Big through the Small": Can LLMs Approximate Human Judgment Distributions on NLI from a Few Explanations?

Jun 25, 2024

The Potential and Challenges of Evaluating Attitudes, Opinions, and Values in Large Language Models

Jun 16, 2024

FinerCut: Finer-grained Interpretable Layer Pruning for Large Language Models

May 28, 2024

Look at the Text: Instruction-Tuned Language Models are More Robust Multiple Choice Selectors than You Think

Apr 12, 2024

On the Essence and Prospect: An Investigation of Alignment Approaches for Big Models

Mar 07, 2024

"My Answer is C": First-Token Probabilities Do Not Match Text Answers in Instruction-Tuned Language Models

Feb 22, 2024

ToViLaG: Your Visual-Language Generative Model is Also An Evildoer

Dec 13, 2023

ACTOR: Active Learning with Annotator-specific Classification Heads to Embrace Human Label Variation

Oct 23, 2023

Large-Scale and Multi-Perspective Opinion Summarization with Diverse Review Subsets

Oct 20, 2023

How to Distill your BERT: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives

May 24, 2023