
Etsuko Ishii

LLM Internal States Reveal Hallucination Risk Faced With a Query

Jul 03, 2024

Belief Revision: The Adaptability of Large Language Models Reasoning

Jun 28, 2024

The Pyramid of Captions

May 01, 2024

High-Dimension Human Value Representation in Large Language Models

Apr 11, 2024

Contrastive Learning for Inference in Dialogue

Oct 19, 2023

Towards Mitigating Hallucination in Large Language Models via Self-Reflection

Oct 10, 2023

Can Question Rewriting Help Conversational Question Answering?

Apr 13, 2022

VScript: Controllable Script Generation with Audio-Visual Presentation

Mar 01, 2022

Survey of Hallucination in Natural Language Generation

Feb 08, 2022

Greenformer: Factorization Toolkit for Efficient Deep Neural Networks

Sep 14, 2021