Yawen Zhang

Evaluation and Mitigation of Agnosia in Multimodal Large Language Models

Sep 07, 2023
Jiaying Lu, Jinmeng Rao, Kezhen Chen, Xiaoyuan Guo, Yawen Zhang, Baochen Sun, Carl Yang, Jie Yang

While Multimodal Large Language Models (MLLMs) are widely used for a variety of vision-language tasks, one observation is that they sometimes misinterpret visual inputs or fail to follow textual instructions even in straightforward cases, leading to irrelevant responses, mistakes, and ungrounded claims. This observation is analogous to a phenomenon in neuropsychology known as agnosia, an inability to correctly process sensory modalities and recognize things (e.g., objects, colors, relations). In our study, we adapt this concept to define "agnosia in MLLMs", and our goal is to comprehensively evaluate and mitigate such agnosia. Inspired by the diagnosis and treatment process in neuropsychology, we propose a novel framework, EMMA (Evaluation and Mitigation of Multimodal Agnosia). In EMMA, we develop an evaluation module that automatically creates fine-grained and diverse visual question answering examples to comprehensively assess the extent of agnosia in MLLMs. We also develop a mitigation module that reduces agnosia through multimodal instruction tuning on fine-grained conversations. To verify the effectiveness of our framework, we evaluate and analyze agnosia in seven state-of-the-art MLLMs using 9K test samples. The results reveal that most of them exhibit agnosia across various aspects and degrees. We further develop a fine-grained instruction set and tune MLLMs to mitigate agnosia, which leads to notable improvements in accuracy.
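
To make the evaluation idea concrete, here is a minimal sketch of how a per-aspect agnosia score could be computed over generated VQA probes; the example schema and the `ask_mllm` callable are hypothetical stand-ins, not EMMA's actual interface.

```python
# Hypothetical sketch of an EMMA-style per-aspect evaluation loop.
from collections import defaultdict

def evaluate_agnosia(examples, ask_mllm):
    """Score an MLLM on VQA probes grouped by agnosia aspect
    (e.g., object, color, relation). `ask_mllm(image, question) -> str`
    is an assumed interface to the model under test."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        pred = ask_mllm(ex["image"], ex["question"])
        total[ex["aspect"]] += 1
        correct[ex["aspect"]] += int(
            pred.strip().lower() == ex["answer"].strip().lower())
    # per-aspect accuracy reveals where the model is "agnosic"
    return {aspect: correct[aspect] / total[aspect] for aspect in total}
```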

Tackling Vision Language Tasks Through Learning Inner Monologues

Aug 19, 2023
Diji Yang, Kezhen Chen, Jinmeng Rao, Xiaoyuan Guo, Yawen Zhang, Jie Yang, Yi Zhang

Visual language tasks require AI models to comprehend and reason with both visual and textual content. Driven by the power of Large Language Models (LLMs), two prominent methods have emerged: (1) hybrid integration of LLMs and Vision-Language Models (VLMs), where visual inputs are first converted into language descriptions by a VLM and then serve as inputs for an LLM to generate the final answer(s); (2) visual feature alignment in language space, where visual inputs are encoded as embeddings and projected into the LLM's language space via further supervised fine-tuning. The first approach offers low training costs and interpretability but is hard to optimize end-to-end. The second approach delivers decent performance, but feature alignment usually requires large amounts of training data and lacks interpretability. To tackle this dilemma, we propose a novel approach, Inner Monologue Multi-Modal Optimization (IMMO), which solves complex vision-language problems by simulating an inner monologue, the cognitive process in which an individual engages in silent verbal communication with themselves. We enable LLMs and VLMs to interact through natural language conversation and propose a two-stage training process that teaches the models how to carry out the inner monologue (asking themselves questions and answering them). IMMO is evaluated on two popular tasks, and the results suggest that by emulating the cognitive phenomenon of internal dialogue, our approach can enhance reasoning and explanation abilities, contributing to a more effective fusion of vision and language models. More importantly, instead of using predefined human-crafted monologues, IMMO learns this process within the deep learning models, promising wider applicability to many different AI problems beyond vision-language tasks.
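
As a rough illustration of the conversational loop (not the paper's exact prompting scheme or two-stage training recipe), the sketch below lets a placeholder `llm` quiz a placeholder `vlm` about the image before committing to an answer.

```python
# Minimal sketch of an inner-monologue loop in the spirit of IMMO.
def inner_monologue(question, image, llm, vlm, max_turns=3):
    """`llm` and `vlm` are assumed callables: llm(prompt) -> str,
    vlm(image, question) -> str."""
    transcript = f"Task: {question}\n"
    for _ in range(max_turns):
        # The LLM silently "asks itself" what it still needs to know.
        sub_q = llm(transcript + "What visual detail should I check "
                    "next? Reply DONE if you can already answer.")
        if sub_q.strip().upper() == "DONE":
            break
        sub_a = vlm(image, sub_q)        # ground the question visually
        transcript += f"Q: {sub_q}\nA: {sub_a}\n"
    return llm(transcript + f"Now give the final answer to: {question}")
```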

LOWA: Localize Objects in the Wild with Attributes

May 31, 2023
Xiaoyuan Guo, Kezhen Chen, Jinmeng Rao, Yawen Zhang, Baochen Sun, Jie Yang

We present LOWA, a novel method for effectively localizing objects with attributes in the wild. It aims to address the insufficiency of current open-vocabulary object detectors, which are limited by the lack of instance-level attribute classification and by rare class names. To train LOWA, we propose a hybrid vision-language training strategy that learns object detection and recognition from class names as well as attribute information. With LOWA, users can not only detect objects by class name but also localize objects by their attributes. LOWA is built on top of a two-tower vision-language architecture and consists of a standard vision transformer as the image encoder and a similar transformer as the text encoder. To learn the alignment between visual and text inputs at the instance level, we train LOWA in three steps: object-level training, attribute-aware learning, and free-text joint training of objects and attributes. This hybrid training strategy first ensures correct object detection, then incorporates instance-level attribute information, and finally balances object-class and attribute sensitivity. We evaluate attribute classification and attribute localization performance on the Open-Vocabulary Attribute Detection (OVAD) benchmark and the Visual Attributes in the Wild (VAW) dataset, and experiments indicate strong zero-shot performance. Ablation studies further demonstrate the effectiveness of each training step of our approach.
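
For intuition on instance-level alignment in a two-tower architecture, here is a toy symmetric contrastive-loss sketch; the encoders, pairing, and temperature are illustrative assumptions, not LOWA's actual objective.

```python
# Toy sketch of instance-level region-text alignment (assumptions noted above).
import torch
import torch.nn.functional as F

def alignment_loss(region_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss between N object-region embeddings and
    their N paired text embeddings (class names and/or attributes)."""
    region_emb = F.normalize(region_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = region_emb @ text_emb.t() / temperature   # (N, N) similarities
    targets = torch.arange(logits.size(0))             # matched pairs on diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```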

Air Pollution Hotspot Detection and Source Feature Analysis using Cross-domain Urban Data

Nov 15, 2022
Yawen Zhang, Michael Hannigan, Qin Lv

Air pollution is a major global environmental health threat, in particular for people who live or work near pollution sources. Areas adjacent to pollution sources often have high ambient pollution concentrations, and those areas are commonly referred to as air pollution hotspots. Detecting and characterizing pollution hotspots are of great importance for air quality management, but are challenging due to the high spatial and temporal variability of air pollutants. In this work, we explore the use of mobile sensing data (i.e., air quality sensors installed on vehicles) to detect pollution hotspots. One major challenge with mobile sensing data is uneven sampling, i.e., data collection can vary by both space and time. To address this challenge, we propose a two-step approach to detect hotspots from mobile sensing data, which includes local spike detection and sample-weighted clustering. Essentially, this approach tackles the uneven sampling issue by weighting samples based on their spatial frequency and temporal hit rate, so as to identify robust and persistent hotspots. To contextualize the hotspots and discover potential pollution source characteristics, we explore a variety of cross-domain urban data and extract features from them. As a soft-validation of the extracted features, we build hotspot inference models for cities with and without mobile sensing data. Evaluation results using real-world mobile sensing air quality data as well as cross-domain urban data demonstrate the effectiveness of our approach in detecting and inferring pollution hotspots. Furthermore, the empirical analysis of hotspots and source features yields useful insights regarding neighborhood pollution sources.

* ACM SIGSPATIAL 2021  
* 10 pages 
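
As a sketch of the sample-weighted clustering step described above, the snippet below down-weights spikes from oversampled locations, up-weights spikes that recur across repeated visits, and clusters the result with weighted DBSCAN; the weighting formula is an illustrative assumption, not the paper's exact definition.

```python
# Illustrative sample-weighted clustering of detected pollution spikes.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_hotspots(spike_coords, spatial_freq, hit_rate,
                     eps=0.001, min_samples=5):
    """Cluster detected spikes (lat/lon rows of `spike_coords`) into
    persistent hotspots: spikes from heavily sampled road segments
    count less, spikes that recur across repeated visits count more."""
    weights = hit_rate / np.maximum(spatial_freq, 1e-9)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
        spike_coords, sample_weight=weights)
    return labels  # -1 is noise; non-negative ids are hotspot clusters
```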

DaSGD: Squeezing SGD Parallelization Performance in Distributed Training Using Delayed Averaging

May 31, 2020
Qinggang Zhou, Yawen Zhang, Pengcheng Li, Xiaoyong Liu, Jun Yang, Runsheng Wang, Ru Huang

State-of-the-art deep learning algorithms rely on distributed training systems to tackle the increasing sizes of models and training data sets. The minibatch stochastic gradient descent (SGD) algorithm requires workers to halt forward/back propagation, wait for gradients aggregated from all workers, and receive weight updates before the next batch of tasks. This synchronous execution model exposes the overhead of gradient/weight communication among a large number of workers in a distributed training system. We propose a new SGD algorithm, DaSGD (Local SGD with Delayed Averaging), which parallelizes SGD with forward/back propagation to hide 100% of the communication overhead. By adjusting the gradient update scheme, the algorithm uses hardware resources more efficiently and reduces reliance on low-latency, high-throughput interconnects. Theoretical analysis and experimental results show a convergence rate of O(1/sqrt(K)), the same as SGD. The performance evaluation demonstrates that DaSGD enables linear performance scaling with cluster size.
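
The following toy, single-process simulation conveys the delayed-averaging idea: local updates never stall, an all-reduce of the weights is launched every few steps, and its result is folded back in only after a delay; `grad_fn` and the rebasing rule are illustrative assumptions, not the paper's exact update scheme.

```python
# Toy simulation of local SGD with delayed averaging (assumptions above).
import numpy as np

def dasgd(grad_fn, w0, n_workers=4, steps=100, lr=0.1, period=8, delay=2):
    """Every `period` steps an average of the weights is launched; it
    "arrives" `delay` steps later, and each worker rebases the local
    progress it made meanwhile onto the (slightly stale) average."""
    w = [np.array(w0, dtype=float) for _ in range(n_workers)]
    in_flight = None  # (arrival_step, average, per-worker snapshots)
    for t in range(1, steps + 1):
        for k in range(n_workers):
            w[k] = w[k] - lr * grad_fn(w[k], k)   # compute never stalls
        if in_flight is not None and t >= in_flight[0]:
            _, avg, snaps = in_flight
            w = [avg + (w[k] - snaps[k]) for k in range(n_workers)]
            in_flight = None
        if in_flight is None and t % period == 0:
            in_flight = (t + delay, np.mean(w, axis=0),
                         [wk.copy() for wk in w])
    return np.mean(w, axis=0)

# e.g. minimizing ||w||^2 with per-worker gradient noise:
# dasgd(lambda w, k: 2 * w + 0.01 * np.random.randn(*w.shape), np.ones(10))
```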

Learn Electronic Health Records by Fully Decentralized Federated Learning

Dec 10, 2019
Songtao Lu, Yawen Zhang, Yunlong Wang, Christina Mack

Federated learning opens a number of research opportunities due to its high communication efficiency in distributed training problems within a star network. In this paper, we focus on improving the communication efficiency of fully decentralized federated learning over a graph, where the algorithm performs several iterations of local updates before enabling communication among the nodes. In this way, the number of communication rounds needed to exchange the commonly shared parameters can be reduced significantly without loss of optimality of the solutions. Multiple numerical simulations based on large, real-world electronic health record databases showcase the superiority of decentralized federated learning compared with classic methods.
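
A minimal sketch of this local-updates-then-gossip pattern, assuming a user-supplied local training step and neighbor lists; the uniform mixing weights and fixed schedule are simplified placeholders, not the paper's algorithm.

```python
# Sketch of decentralized federated learning over a graph (assumptions above).
import numpy as np

def decentralized_fl(local_step, w0, neighbors, rounds=10, local_iters=5):
    """`neighbors[i]` lists node i's neighbors in the graph;
    `local_step(w, node)` returns w after one local training iteration."""
    n = len(neighbors)
    w = [np.array(w0, dtype=float) for _ in range(n)]
    for _ in range(rounds):
        for i in range(n):                      # local updates: no comms
            for _ in range(local_iters):
                w[i] = local_step(w[i], i)
        # one communication round: average with neighbors (plus self)
        w = [np.mean([w[j] for j in list(neighbors[i]) + [i]], axis=0)
             for i in range(n)]
    return w
```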

Applying High-Resolution Visible Imagery to Satellite Melt Pond Fraction Retrieval: A Neural Network Approach

Apr 13, 2017
Qi Liu, Yawen Zhang, Qin Lv, Li Shang

During summer, melt ponds have a significant influence on Arctic sea-ice albedo. The melt pond fraction (MPF) can also be used to forecast Arctic sea ice over a certain period, so it is important to retrieve accurate MPF from satellite data for Arctic research. This paper proposes a satellite MPF retrieval model based on a multi-layer neural network, named MPF-NN. Our model uses multi-spectral satellite data as input and MPF information from multi-site and multi-period visible imagery as prior knowledge for modeling. It can effectively model the evolution of melt ponds across different regions and periods over the Arctic. Evaluation results show that MPF retrieved from MODIS data using the proposed model has an RMSE of 3.91% and a correlation coefficient of 0.73. The seasonal distribution of MPF is also consistent with previous results.
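
As a rough, self-contained sketch of an MPF-NN-style regressor (the layer sizes, band count, and synthetic data below are assumptions, not the paper's configuration):

```python
# Hypothetical multi-layer network mapping multi-spectral reflectances to MPF.
import numpy as np
from sklearn.neural_network import MLPRegressor

# X: (n_samples, n_bands) multi-spectral reflectances (synthetic here);
# y: MPF labels in [0, 1], in the paper derived from high-resolution imagery
rng = np.random.default_rng(0)
X, y = rng.random((500, 7)), rng.random(500)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0).fit(X, y)
mpf = np.clip(model.predict(X), 0.0, 1.0)  # keep fractions physically valid
```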
