Hongyin Luo

HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding

Mar 01, 2024
Zhaorun Chen, Zhuokai Zhao, Hongyin Luo, Huaxiu Yao, Bo Li, Jiawei Zhou

Joint Audio and Speech Understanding

Oct 02, 2023
Yuan Gong, Alexander H. Liu, Hongyin Luo, Leonid Karlinsky, James Glass

Self-Specialization: Uncovering Latent Expertise within Large Language Models

Sep 29, 2023
Junmo Kang, Hongyin Luo, Yada Zhu, James Glass, David Cox, Alan Ritter, Rogerio Feris, Leonid Karlinsky

Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning

Sep 19, 2023
Tianhua Zhang, Jiaxin Ge, Hongyin Luo, Yung-Sung Chuang, Mingye Gao, Yuan Gong, Xixin Wu, Yoon Kim, Helen Meng, James Glass

DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models

Sep 07, 2023
Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James Glass, Pengcheng He

Entailment as Robust Self-Learner

May 26, 2023
Jiaxin Ge, Hongyin Luo, Yoon Kim, James Glass

SAIL: Search-Augmented Instruction Learning

May 24, 2023
Hongyin Luo, Yung-Sung Chuang, Yuan Gong, Tianhua Zhang, Yoon Kim, Xixin Wu, Danny Fox, Helen Meng, James Glass

Listen, Think, and Understand

May 18, 2023
Yuan Gong, Hongyin Luo, Alexander H. Liu, Leonid Karlinsky, James Glass

Chain of Thought Prompt Tuning in Vision Language Models

Apr 16, 2023
Jiaxin Ge, Hongyin Luo, Siyuan Qian, Yulu Gan, Jie Fu, Shanghang Zhang

Interpretable Unified Language Checking

Apr 07, 2023
Tianhua Zhang, Hongyin Luo, Yung-Sung Chuang, Wei Fang, Luc Gaitskell, Thomas Hartvigsen, Xixin Wu, Danny Fox, Helen Meng, James Glass