Yiyang Zhou

Holistic Analysis of Hallucination in GPT-4V(ision): Bias and Interference Challenges

Nov 07, 2023
Chenhang Cui, Yiyang Zhou, Xinyu Yang, Shirley Wu, Linjun Zhang, James Zou, Huaxiu Yao

While GPT-4V(ision) impressively models both visual and textual information simultaneously, its hallucination behavior has not been systematically assessed. To bridge this gap, we introduce a new benchmark, namely, the Bias and Interference Challenges in Visual Language Models (Bingo). This benchmark is designed to evaluate and shed light on the two common types of hallucinations in visual language models: bias and interference. Here, bias refers to the model's tendency to hallucinate certain types of responses, possibly due to an imbalance in its training data. Interference pertains to scenarios where the judgment of GPT-4V(ision) can be disrupted by how the text prompt is phrased or how the input image is presented. We identify a notable regional bias, whereby GPT-4V(ision) is better at interpreting Western images or images with English writing compared to images from other countries or containing text in other languages. Moreover, GPT-4V(ision) is vulnerable to leading questions and is often confused when interpreting multiple images together. Popular mitigation approaches, such as self-correction and chain-of-thought reasoning, are not effective in resolving these challenges. We also identify similar biases and interference vulnerabilities in LLaVA and Bard. Our results characterize the hallucination challenges in GPT-4V(ision) and other state-of-the-art visual-language models, and highlight the need for new solutions. The Bingo benchmark is available at https://github.com/gzcch/Bingo.
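As a concrete, if simplified, picture of such a category-wise evaluation, the sketch below tabulates per-category accuracy over benchmark cases. The JSON layout, field names, and the `query_model` callable are assumptions made for illustration and are not the released Bingo tooling.

```python
import json
from collections import defaultdict

def evaluate_by_category(cases_path, query_model):
    """Tabulate accuracy per hallucination category (e.g. bias vs. interference).

    `cases_path` is assumed to point to a JSON list of dicts with 'image',
    'question', 'reference', and 'category' fields; `query_model` is any
    callable returning the model's answer string for an (image, question) pair.
    """
    correct, total = defaultdict(int), defaultdict(int)
    with open(cases_path) as f:
        cases = json.load(f)
    for case in cases:
        answer = query_model(case["image"], case["question"])
        total[case["category"]] += 1
        # A crude substring check stands in for the human/GPT judging used in practice.
        if case["reference"].lower() in answer.lower():
            correct[case["category"]] += 1
    return {c: correct[c] / total[c] for c in total}
```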


Analyzing and Mitigating Object Hallucination in Large Vision-Language Models

Oct 01, 2023
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao


Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact many vision-language tasks, such as visual summarization and reasoning. To address this issue, we propose a simple yet powerful algorithm, LVLM Hallucination Revisor (LURE), to post-hoc rectify object hallucination in LVLMs by reconstructing less hallucinatory descriptions. LURE is grounded in a rigorous statistical analysis of the key factors underlying object hallucination, including co-occurrence (the frequent appearance of certain objects alongside others in images), uncertainty (objects with higher uncertainty during LVLM decoding), and object position (hallucination often appears in the later part of the generated text). LURE can also be seamlessly integrated with any LVLMs. We evaluate LURE on six open-source LVLMs, achieving a 23% improvement in general object hallucination evaluation metrics over the previous best approach. In both GPT and human evaluations, LURE consistently ranks at the top. Our data and code are available at https://github.com/YiyangZhou/LURE.
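To make the post-hoc idea concrete, the sketch below flags hallucination-prone object tokens using the three factors named above (co-occurrence, decoding uncertainty, and late position) and masks them so a revisor model can rewrite the caption. The thresholds, input formats, and placeholder token are hypothetical; the released LURE code is the authoritative implementation.

```python
def mask_risky_objects(caption_tokens, token_uncertainty, cooccur_score,
                       unc_thresh=0.5, cooc_thresh=0.8, late_frac=0.6):
    """Replace hallucination-prone object tokens with a placeholder.

    caption_tokens   : list of generated tokens
    token_uncertainty: per-token decoding uncertainty in [0, 1]
    cooccur_score    : dict mapping a token to how strongly it co-occurs with
                       the other mentioned objects in the training data
    """
    n = len(caption_tokens)
    masked = []
    for i, tok in enumerate(caption_tokens):
        risky = (
            token_uncertainty[i] > unc_thresh             # uncertain during decoding
            or cooccur_score.get(tok, 0.0) > cooc_thresh  # likely a co-occurrence-driven guess
            or (i / n > late_frac and token_uncertainty[i] > unc_thresh / 2)  # late in the caption
        )
        masked.append("[IDK]" if risky else tok)
    return " ".join(masked)

# A revisor LVLM is then prompted with the image and the masked caption and asked
# to fill in or drop the "[IDK]" slots, yielding a less hallucinatory description.
```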


Evaluation and Analysis of Hallucination in Large Vision-Language Models

Aug 29, 2023
Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang


Large Vision-Language Models (LVLMs) have recently achieved remarkable success. However, LVLMs are still plagued by the hallucination problem, which limits their practicality in many scenarios. Hallucination refers to information in LVLMs' responses that does not exist in the visual input, posing potential risks of substantial consequences. There has been limited work studying hallucination evaluation in LVLMs. In this paper, we propose Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based hallucination evaluation framework. HaELM achieves approximately 95% of ChatGPT's performance and has additional advantages including low cost, reproducibility, privacy preservation, and local deployment. Leveraging HaELM, we evaluate hallucination in current LVLMs. Furthermore, we analyze the factors contributing to hallucination in LVLMs and offer helpful suggestions for mitigating the hallucination problem. Our training data and human-annotated hallucination data will be made public soon.
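The central step of such an evaluation framework, asking a locally deployed judge LLM whether a response is supported by a reference description, might look roughly like the sketch below; the prompt wording and the `generate` callable are placeholders rather than the HaELM implementation.

```python
JUDGE_PROMPT = (
    "Reference description:\n{reference}\n\n"
    "Model response:\n{response}\n\n"
    "Does the response contain information that is not supported by the "
    "reference? Answer strictly with 'hallucination' or 'ok'."
)

def judge_hallucination(reference: str, response: str, generate) -> bool:
    """Return True if the judge LLM labels the response as hallucinated.

    `generate` is any text-completion callable (e.g. a wrapper around a locally
    deployed open-source LLM), which is what keeps the evaluation cheap,
    reproducible, and private compared with calling ChatGPT.
    """
    verdict = generate(JUDGE_PROMPT.format(reference=reference, response=response))
    return "hallucination" in verdict.lower()
```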

* 11 pages, 5 figures 

Semantic-Human: Neural Rendering of Humans from Monocular Video with Human Parsing

Aug 19, 2023
Jie Zhang, Pengcheng Shi, Zaiwang Gu, Yiyang Zhou, Zhi Wang


The neural rendering of humans is a topic of great research significance. However, previous works mostly focus on achieving photorealistic details, neglecting the exploration of human parsing. Additionally, classical semantic parsing methods are limited in their ability to efficiently represent fine results under complex motions. Human parsing is inherently related to radiance reconstruction, as similar appearance and geometry often correspond to similar semantic parts. Furthermore, previous works often design a motion field that maps from the observation space to the canonical space, which tends to exhibit either underfitting or overfitting, resulting in limited generalization. In this paper, we present Semantic-Human, a novel method that achieves both photorealistic details and viewpoint-consistent human parsing for the neural rendering of humans. Specifically, we extend neural radiance fields (NeRF) to jointly encode semantics, appearance, and geometry to achieve accurate 2D semantic labels using noisy pseudo-label supervision. Leveraging the inherent consistency and smoothness properties of NeRF, Semantic-Human achieves consistent human parsing in both continuous and novel views. We also introduce constraints derived from the SMPL surface for the motion field and regularization for the recovered volumetric geometry. We evaluate the model on the ZJU-MoCap dataset, and the highly competitive results demonstrate the effectiveness of our proposed Semantic-Human. We also showcase various compelling applications, including label denoising, label synthesis, and image editing, and empirically validate its advantageous properties.
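The key architectural extension, a radiance-field head that emits semantic logits alongside density and color, can be sketched in PyTorch as follows; the layer widths and number of parsing classes are illustrative rather than the paper's configuration.

```python
import torch.nn as nn

class SemanticNeRFHead(nn.Module):
    """Toy NeRF head that jointly predicts density, RGB, and semantic logits."""

    def __init__(self, feat_dim=256, num_classes=20):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                   nn.Linear(256, 256), nn.ReLU())
        self.sigma = nn.Linear(256, 1)                    # volume density
        self.rgb = nn.Sequential(nn.Linear(256, 3), nn.Sigmoid())
        self.semantic = nn.Linear(256, num_classes)       # per-point parsing logits

    def forward(self, x):
        h = self.trunk(x)
        return self.sigma(h), self.rgb(h), self.semantic(h)

# Semantic logits are volume-rendered along each ray just like color and
# supervised with noisy 2D pseudo-labels, so NeRF's multi-view consistency
# acts as a regularizer on the parsing.
```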


Overlap Bias Matching is Necessary for Point Cloud Registration

Aug 18, 2023
Pengcheng Shi, Jie Zhang, Haozhe Cheng, Junyang Wang, Yiyang Zhou, Chenlin Zhao, Jihua Zhu


Point cloud registration is a fundamental problem in many domains. In practice, the overlap between the point clouds to be registered may be relatively small. Most unsupervised methods lack an effective initial evaluation of overlap, leading to suboptimal registration accuracy. To address this issue, we propose an unsupervised Overlap Bias Matching Network (OBMNet) for partial point cloud registration. Specifically, we propose a plug-and-play Overlap Bias Matching Module (OBMM) comprising two integral components: an overlap sampling module and a bias prediction module. These two components are used to capture the distribution of overlapping regions and to predict bias coefficients of point cloud common structures, respectively. We then integrate OBMM with a neighbor map matching module to robustly identify correspondences by precisely merging the matching scores of points within the neighborhood, which resolves the ambiguities of single-point features. OBMNet maintains its efficacy even in pair-wise registration scenarios with low overlap ratios. Experimental results on extensive datasets demonstrate that our approach achieves a significant improvement over state-of-the-art registration approaches.
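A schematic of how the two OBMM components could modulate a matching score matrix is sketched below; the interfaces and the exact way overlap likelihoods and bias coefficients enter the scores are assumptions intended only to make the described data flow concrete, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OverlapBiasMatching(nn.Module):
    """Skeleton of the overlap-sampling + bias-prediction idea (illustrative only)."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.overlap_head = nn.Linear(feat_dim, 1)  # per-point overlap likelihood
        self.bias_head = nn.Linear(feat_dim, 1)     # per-point bias coefficient

    def forward(self, feats_src, feats_tgt):        # (N, d) and (M, d) point features
        # Score how likely each point lies in the overlapping region.
        overlap_src = torch.sigmoid(self.overlap_head(feats_src)).squeeze(-1)
        overlap_tgt = torch.sigmoid(self.overlap_head(feats_tgt)).squeeze(-1)
        # Bias coefficients re-weight matches on common structures.
        bias_src = self.bias_head(feats_src).squeeze(-1)
        # Cosine similarity modulated by overlap and bias; downstream
        # neighbor-map matching would aggregate these scores over neighborhoods.
        sim = F.normalize(feats_src, dim=-1) @ F.normalize(feats_tgt, dim=-1).t()
        return sim * overlap_src[:, None] * overlap_tgt[None, :] + bias_src[:, None]
```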

* arXiv admin note: text overlap with arXiv:2202.11292 by other authors 

Contrastive Label Enhancement

May 16, 2023
Yifei Wang, Yiyang Zhou, Jihua Zhu, Xinyuan Liu, Wenbiao Yan, Zhiqiang Tian


Label distribution learning (LDL) is a new machine learning paradigm for resolving label ambiguity. Since it is difficult to obtain label distributions directly, many studies focus on how to recover label distributions from logical labels, a task dubbed label enhancement (LE). Existing LE methods estimate label distributions by simply building a mapping from features to label distributions under the supervision of logical labels. They typically overlook the fact that both features and logical labels are descriptions of the same instance from different views. We therefore propose a novel method called Contrastive Label Enhancement (ConLE), which integrates features and logical labels into a unified projection space to generate high-level features through a contrastive learning strategy. In this approach, the features and logical labels belonging to the same sample are pulled closer, while those of different samples are pushed farther apart in the projection space. Subsequently, we leverage the obtained high-level features to derive label distributions through a well-designed training strategy that considers the consistency of label attributes. Extensive experiments on LDL benchmark datasets demonstrate the effectiveness and superiority of our method.
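Read as an InfoNCE-style objective in which a sample's feature projection and logical-label projection form the positive pair, the alignment step can be sketched as follows; this is an interpretation for illustration, not the released ConLE code.

```python
import torch
import torch.nn.functional as F

def conle_style_contrastive_loss(z_feat, z_label, temperature=0.5):
    """InfoNCE-style loss pulling each sample's feature/label projections together.

    z_feat, z_label: (N, d) projections of the features and the logical labels
    into the shared space; row i of each matrix comes from the same instance.
    """
    z_feat = F.normalize(z_feat, dim=1)
    z_label = F.normalize(z_label, dim=1)
    logits = z_feat @ z_label.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(z_feat.size(0), device=z_feat.device)
    # Symmetric cross-entropy: each projection must retrieve its own counterpart,
    # so same-sample pairs are pulled together and different samples pushed apart.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```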

* 9 pages, 4 figures, published at IJCAI 2023 

DualGenerator: Information Interaction-based Generative Network for Point Cloud Completion

May 16, 2023
Pengcheng Shi, Haozhe Cheng, Xu Han, Yiyang Zhou, Jihua Zhu


Point cloud completion estimates complete shapes from incomplete point clouds to obtain higher-quality point cloud data. Most existing methods consider only global object features, ignoring the spatial and semantic information of adjacent points. They cannot distinguish structural information well between different object parts, and their robustness is poor. To tackle these challenges, we propose an information interaction-based generative network for point cloud completion ($\mathbf{DualGenerator}$). It contains an adversarial generation path and a variational generation path, which interact with each other and share weights. DualGenerator introduces a local refinement module into the generation paths, which captures general structures from partial inputs and then refines the shape details of the point cloud. It promotes completion in the unknown region and makes the distinction between different parts more obvious. Moreover, we design DGStyleGAN to further improve the generation quality. It promotes the robustness of the network through fusion analysis of the dual-path completion results. Qualitative and quantitative evaluations demonstrate that our method is superior on the MVP and Completion3D datasets. Performance does not degrade significantly under added noise or sparse sampling.
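A toy outline of the dual-path idea, a variational path and an adversarial path sharing encoder/decoder weights, is given below; the per-point encoding and layer sizes are placeholders for illustration only, not the DualGenerator architecture.

```python
import torch
import torch.nn as nn

class DualPathCompletion(nn.Module):
    """Sketch of shared-weight variational + adversarial generation paths."""

    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                     nn.Linear(64, 2 * latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 3))

    def forward(self, partial_points):                   # (N, 3) partial cloud
        mu, logvar = self.encoder(partial_points).chunk(2, dim=-1)
        # Variational path: sample a latent and decode (trained with a KL term).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        completed_var = self.decoder(z)
        # Adversarial path: reuse the same decoder on the mean latent; its output
        # would be scored by a discriminator (e.g. a StyleGAN-like critic).
        completed_adv = self.decoder(mu)
        return completed_var, completed_adv
```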


mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality

Apr 27, 2023
Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, Chenliang Li, Yuanhong Xu, Hehong Chen, Junfeng Tian, Qian Qi, Ji Zhang, Fei Huang


Large language models (LLMs) have demonstrated impressive zero-shot abilities on a variety of open-ended tasks, while recent research has also explored the use of LLMs for multi-modal generation. In this study, we introduce mPLUG-Owl, a novel training paradigm that equips LLMs with multi-modal abilities through modularized learning of a foundation LLM, a visual knowledge module, and a visual abstractor module. This approach can support multiple modalities and facilitate diverse unimodal and multimodal abilities through modality collaboration. The training paradigm of mPLUG-Owl involves a two-stage method for aligning image and text, which learns visual knowledge with the assistance of the LLM while maintaining and even improving its generation abilities. In the first stage, the visual knowledge module and abstractor module are trained with a frozen LLM module to align the image and text. In the second stage, language-only and multi-modal supervised datasets are used to jointly fine-tune a low-rank adaptation (LoRA) module on the LLM and the abstractor module, while the visual knowledge module is kept frozen. We carefully build a visually-related instruction evaluation set, OwlEval. Experimental results show that our model outperforms existing multi-modal models, demonstrating mPLUG-Owl's impressive instruction and visual understanding ability, multi-turn conversation ability, and knowledge reasoning ability. In addition, we observe some unexpected and exciting abilities, such as multi-image correlation and scene text understanding, which make it possible to leverage mPLUG-Owl for harder real-world scenarios, such as vision-only document comprehension. Our code, pre-trained model, instruction-tuned models, and evaluation set are available at https://github.com/X-PLUG/mPLUG-Owl. The online demo is available at https://www.modelscope.cn/studios/damo/mPLUG-Owl.
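Operationally, the two stages differ mainly in which parameter groups are trainable; a hedged sketch is shown below, where the submodule names and the PEFT-style LoRA parameter naming are assumptions rather than details of the mPLUG-Owl codebase.

```python
def set_training_stage(model, stage: int):
    """Freeze/unfreeze parameter groups following the two-stage recipe.

    `model` is assumed to expose `visual_encoder`, `abstractor`, and `llm`
    submodules; in stage 2 the LLM is assumed to already carry LoRA adapters
    (e.g. injected with the peft library) whose parameter names contain 'lora_'.
    """
    for p in model.parameters():
        p.requires_grad = False

    if stage == 1:
        # Stage 1: align image and text by training the visual knowledge module
        # and the visual abstractor against a frozen LLM.
        for p in model.visual_encoder.parameters():
            p.requires_grad = True
        for p in model.abstractor.parameters():
            p.requires_grad = True
    else:
        # Stage 2: freeze the visual knowledge module; jointly tune the abstractor
        # and the LoRA adapters on the LLM with mixed language-only and
        # multimodal instruction data.
        for p in model.abstractor.parameters():
            p.requires_grad = True
        for name, p in model.llm.named_parameters():
            if "lora_" in name:
                p.requires_grad = True
```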

* Work in progress 

Semantically Consistent Multi-view Representation Learning

Mar 08, 2023
Yiyang Zhou, Qinghai Zheng, Shunshun Bai, Jihua Zhu


In this work, we address the challenging task of Unsupervised Multi-view Representation Learning (UMRL), which requires learning a unified feature representation from multiple views in an unsupervised manner. Existing UMRL methods mainly concentrate on the learning process in the feature space while ignoring the valuable semantic information hidden in the different views. To address this issue, we propose Semantically Consistent Multi-view Representation Learning (SCMRL), which excavates the underlying multi-view semantic consensus information and uses it to guide the learning of the unified feature representation. Specifically, SCMRL consists of a within-view reconstruction module and a unified feature representation learning module, which are elegantly integrated through a contrastive learning strategy to simultaneously align the semantic labels of the view-specific feature representations and the learned unified feature representation. In this way, the consensus information in the semantic space can be effectively exploited to constrain the learning of the unified feature representation. Extensive experiments demonstrate its superiority over several state-of-the-art algorithms.
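One way to read the objective is as within-view reconstruction plus a contrastive term that makes each view's semantic assignments retrieve those of the unified representation for the same sample; the sketch below follows that reading and is not the released SCMRL code.

```python
import torch
import torch.nn.functional as F

def scmrl_style_objective(x_views, recon_views, sem_views, sem_unified, temperature=0.5):
    """Within-view reconstruction plus contrastive semantic alignment (illustrative).

    x_views / recon_views : lists of (N, d_v) view inputs and their reconstructions
    sem_views             : list of (N, k) semantic assignments per view
    sem_unified           : (N, k) semantic assignments of the unified representation
    """
    recon_loss = sum(F.mse_loss(r, x) for r, x in zip(recon_views, x_views))

    targets = torch.arange(sem_unified.size(0), device=sem_unified.device)
    align_loss = 0.0
    for s in sem_views:
        logits = F.normalize(s, dim=1) @ F.normalize(sem_unified, dim=1).t() / temperature
        # Each view's semantic assignment should retrieve the unified one
        # belonging to the same sample.
        align_loss = align_loss + F.cross_entropy(logits, targets)

    return recon_loss + align_loss
```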

* 19 pages, 4 figures 

Multi-view Semantic Consistency based Information Bottleneck for Clustering

Feb 28, 2023
Wenbiao Yan, Jihua Zhu, Yiyang Zhou, Yifei Wang, Qinghai Zheng


Multi-view clustering can exploit multi-source information for unsupervised clustering. Most existing methods focus on learning a fused representation matrix while ignoring the influence of private information and noise. To address this limitation, we introduce a novel Multi-view Semantic Consistency based Information Bottleneck for clustering (MSCIB). Specifically, MSCIB pursues semantic consistency to improve the information-bottleneck learning process for the different views. It aligns multiple views in the semantic space and jointly extracts the valuable consistent information of the multi-view data. In this way, the semantic consistency learned from multi-view data helps the information bottleneck to more precisely distinguish the consistent information and to learn a unified feature representation with more discriminative consistent information for clustering. Experiments on various types of multi-view datasets show that MSCIB achieves state-of-the-art performance.
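The semantic-consistency ingredient can be illustrated as a cross-view agreement term that, in the full method, is traded off against the usual information-bottleneck compression; the sketch below is one such term, with the consensus construction and variable names chosen for illustration only.

```python
import torch
import torch.nn.functional as F

def semantic_consistency_term(sem_views):
    """Cross-view semantic consistency used to steer the bottleneck (sketch).

    sem_views: list of (N, k) per-view semantic probability matrices whose rows
    sum to 1. Each view's assignment is pulled toward the average ("consensus")
    assignment across views.
    """
    consensus = torch.stack(sem_views).mean(dim=0)        # (N, k)
    loss = 0.0
    for s in sem_views:
        # F.kl_div expects log-probabilities as input and probabilities as target,
        # so this measures KL(consensus || view) for every view.
        loss = loss + F.kl_div(s.clamp_min(1e-8).log(), consensus,
                               reduction="batchmean")
    # In MSCIB this consistency signal is combined with the IB trade-off:
    # compress view-specific noise while retaining the consistent information.
    return loss
```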

* 9 pages, 4 figures, conference 