Ziwei Ji

Towards Mitigating Hallucination in Large Language Models via Self-Reflection

Oct 10, 2023

Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models

Oct 09, 2023

Think before you speak: Training Language Models With Pause Tokens

Oct 03, 2023

Improving Query-Focused Meeting Summarization with Query-Relevant Knowledge

Sep 05, 2023

Diverse and Faithful Knowledge-Grounded Dialogue Generation via Sequential Posterior Inference

Jun 01, 2023

Depth Dependence of $μ$P Learning Rates in ReLU MLPs

May 13, 2023

A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity

Feb 28, 2023

NusaCrowd: Open Source Initiative for Indonesian NLP Resources

Dec 20, 2022

RHO ($ρ$): Reducing Hallucination in Open-domain Dialogues with Knowledge Grounding

Dec 03, 2022

Plausible May Not Be Faithful: Probing Object Hallucination in Vision-Language Pre-training

Oct 14, 2022