Sree Harsha Tanneru

On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models

Jun 15, 2024

Faithfulness vs. Plausibility: On the (Un)Reliability of Explanations from Large Language Models

Feb 08, 2024

Quantifying Uncertainty in Natural Language Explanations of Large Language Models

Nov 06, 2023

Word-Level Explanations for Analyzing Bias in Text-to-Image Models

Jun 03, 2023