
Gal Yona

Can Large Language Models Faithfully Express Their Intrinsic Uncertainty in Words?

May 27, 2024

Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?

May 09, 2024

Narrowing the Knowledge Evaluation Gap: Open-Domain Question Answering with Multi-Granularity Answers

Jan 09, 2024

Surfacing Biases in Large Language Models using Contrastive Input Decoding

May 12, 2023

Malign Overfitting: Interpolation Can Provably Preclude Invariance

Nov 28, 2022

Useful Confidence Measures: Beyond the Max Score

Oct 25, 2022

Active Learning with Label Comparisons

Apr 10, 2022

Decision-Making under Miscalibration

Mar 18, 2022

Revisiting Sanity Checks for Saliency Maps

Oct 27, 2021

Consider the Alternatives: Navigating Fairness-Accuracy Tradeoffs via Disqualification

Oct 02, 2021