
Zhen Yang

TEAL: Tokenize and Embed ALL for Multi-modal Large Language Models

Nov 14, 2023
Zhen Yang, Yingxue Zhang, Fandong Meng, Jie Zhou

Although Multi-modal Large Language Models (MM-LLMs) have made exciting strides recently, they still struggle to efficiently model the interactions among multi-modal inputs and the generation of non-textual modalities. In this work, we propose TEAL (Tokenize and Embed ALL), an approach that treats the input from any modality as a token sequence and learns a joint embedding space for all modalities. Specifically, for the input from any modality, TEAL first discretizes it into a token sequence with an off-the-shelf tokenizer and embeds the token sequence into a joint embedding space with a learnable embedding matrix. MM-LLMs then only need to predict the multi-modal tokens autoregressively, as textual LLMs do. Finally, the corresponding de-tokenizer is applied to generate the output in each modality based on the predicted token sequence. With the joint embedding space, TEAL enables frozen LLMs to perform both understanding and generation tasks involving non-textual modalities such as images and audio. Thus, the textual LLM can simply act as an interface while maintaining its high performance in textual understanding and generation. Experiments show that TEAL achieves substantial improvements in multi-modal understanding and implements a simple scheme for multi-modal generation.
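As a rough illustration of the tokenize-embed-predict-detokenize pipeline described above, the sketch below maps modality-local token ids into one joint vocabulary and looks them up in a single learnable embedding matrix. The vocabulary sizes, offsets, and hidden size are illustrative assumptions, not TEAL's actual configuration, and the off-the-shelf tokenizers are stood in by hand-written id tensors.

```python
# Minimal sketch (not the authors' code) of the TEAL idea: every modality is
# discretized into token ids, the ids are shifted into one shared vocabulary,
# and a single learnable embedding matrix produces inputs for the frozen LLM.
import torch
import torch.nn as nn

# Illustrative vocabulary sizes; real tokenizers (BPE, VQ codes, etc.) differ.
TEXT_VOCAB, IMAGE_VOCAB, AUDIO_VOCAB = 32000, 8192, 1024
OFFSETS = {"text": 0, "image": TEXT_VOCAB, "audio": TEXT_VOCAB + IMAGE_VOCAB}
JOINT_VOCAB = TEXT_VOCAB + IMAGE_VOCAB + AUDIO_VOCAB
HIDDEN = 512  # would match the frozen LLM's hidden size in practice

joint_embedding = nn.Embedding(JOINT_VOCAB, HIDDEN)  # the learnable part

def to_joint_ids(token_ids: torch.Tensor, modality: str) -> torch.Tensor:
    """Shift modality-local token ids into the joint vocabulary."""
    return token_ids + OFFSETS[modality]

# Example: interleave text and image tokens into one sequence for the LLM.
text_ids = torch.tensor([5, 17, 204])           # from a text tokenizer
image_ids = torch.tensor([3001, 77, 559, 12])   # from a visual tokenizer
sequence = torch.cat([to_joint_ids(text_ids, "text"),
                      to_joint_ids(image_ids, "image")])
inputs_embeds = joint_embedding(sequence).unsqueeze(0)  # (1, seq_len, HIDDEN)
print(inputs_embeds.shape)
# Generation runs in reverse: the LLM predicts joint ids autoregressively, ids
# are shifted back per modality, and each modality's de-tokenizer decodes them.
```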

* Multi-modal, Large Language Models, Tokenizer, Understanding and Generation 

Object-aware Inversion and Reassembly for Image Editing

Oct 18, 2023
Zhen Yang, Dinggang Gui, Wen Wang, Hao Chen, Bohan Zhuang, Chunhua Shen

By comparing the original and target prompts in an editing task, we can obtain numerous editing pairs, each comprising an object and its corresponding editing target. To allow editability while maintaining fidelity to the input image, existing editing methods typically involve a fixed number of inversion steps that project the whole input image to its noisier latent representation, followed by a denoising process guided by the target prompt. However, we find that the optimal number of inversion steps for achieving ideal editing results varies significantly among different editing pairs, owing to varying editing difficulties. Therefore, the current literature, which relies on a fixed number of inversion steps, produces sub-optimal generation quality, especially when handling multiple editing pairs in a natural image. To this end, we propose a new image editing paradigm, dubbed Object-aware Inversion and Reassembly (OIR), to enable object-level fine-grained editing. Specifically, we design a new search metric that determines the optimal inversion step for each editing pair by jointly considering the editability of the target and the fidelity of the non-editing region. When editing an image, we use this metric to find the optimal inversion step for each editing pair and edit the pairs separately to avoid concept mismatch. Subsequently, we propose an additional reassembly step to seamlessly integrate the respective editing results and the non-editing region into the final edited image. To systematically evaluate the effectiveness of our method, we collect two datasets for benchmarking single- and multi-object editing, respectively. Experiments demonstrate that our method achieves superior performance in editing object shapes, colors, materials, categories, etc., especially in multi-object editing scenarios.
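The core of the method, as described above, is a per-pair search for the inversion depth that balances editability against fidelity to the non-editing region. The following sketch shows that search loop in the abstract; invert_to_step, denoise_with_prompt, editability, fidelity, and the mixing weight alpha are hypothetical stand-ins for the diffusion and scoring components, not the paper's actual API.

```python
# Hypothetical sketch of the per-editing-pair inversion-step search; the callables
# stand in for diffusion inversion/denoising and the two terms of the search metric.
from typing import Callable, Dict, List

def search_optimal_step(
    image,
    editing_pair: Dict[str, str],          # {"object": ..., "target": ...}
    candidate_steps: List[int],
    invert_to_step: Callable,
    denoise_with_prompt: Callable,
    editability: Callable,
    fidelity: Callable,
    alpha: float = 0.5,                    # assumed trade-off weight
) -> int:
    """Pick the inversion depth that best trades off edit quality and fidelity."""
    best_step, best_score = candidate_steps[0], float("-inf")
    for t in candidate_steps:
        latent = invert_to_step(image, t)                        # noisier latent at depth t
        edited = denoise_with_prompt(latent, editing_pair["target"], t)
        score = (alpha * editability(edited, editing_pair["target"])
                 + (1 - alpha) * fidelity(edited, image, editing_pair["object"]))
        if score > best_score:
            best_step, best_score = t, score
    return best_step

# Toy usage with stand-in components; real OIR uses diffusion-based inversion.
step = search_optimal_step(
    image=None, editing_pair={"object": "cat", "target": "a dog"},
    candidate_steps=[10, 20, 30, 40],
    invert_to_step=lambda img, t: t,
    denoise_with_prompt=lambda lat, prompt, t: lat,
    editability=lambda out, prompt: -abs(out - 25),
    fidelity=lambda out, img, obj: 0.0,
)
print(step)  # 20 with these toy scores
```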

* Project Page: https://aim-uofa.github.io/OIR-Diffusion/ 

XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners

Oct 09, 2023
Yun Luo, Zhen Yang, Fandong Meng, Yingjie Li, Fang Guo, Qinglin Qi, Jie Zhou, Yue Zhang

Active learning aims to construct an effective training set by iteratively curating the most informative unlabeled data for annotation, which is especially practical in low-resource tasks. Most active learning techniques for classification rely on the model's uncertainty or disagreement to choose unlabeled data. However, previous work indicates that existing models are poor at quantifying predictive uncertainty, which can lead to over-confidence in superficial patterns and a lack of exploration. Inspired by the cognitive processes by which humans deduce and predict through causal information, we propose a novel Explainable Active Learning framework (XAL) for low-resource text classification, which encourages classifiers to justify their inferences and to delve into unlabeled data for which they cannot provide reasonable explanations. Specifically, besides using a pre-trained bi-directional encoder for classification, we employ a pre-trained uni-directional decoder to generate and score explanations. A ranking loss is proposed to enhance the decoder's ability to score explanations. During the selection of unlabeled data, we combine the predictive uncertainty of the encoder and the explanation score of the decoder to acquire informative data for annotation. As XAL is a general framework for text classification, we test our method on six different classification tasks. Extensive experiments show that XAL achieves substantial improvements on all six tasks over previous AL methods. Ablation studies demonstrate the effectiveness of each component, and human evaluation shows that the model trained with XAL performs surprisingly well in explaining its predictions.
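To make the data-selection step above concrete, here is a minimal sketch of an acquisition score that combines the encoder's predictive entropy with an explanation score from the decoder; the function names, the linear mixing weight beta, and the placeholder scoring callables are assumptions rather than the paper's exact formulation.

```python
# Sketch of XAL-style acquisition: prefer examples that are uncertain AND whose
# explanations the decoder scores poorly. In the paper the explanation score
# comes from a decoder trained with a ranking loss; here it is a placeholder.
import math
from typing import Callable, List, Sequence

def entropy(probs: Sequence[float]) -> float:
    """Predictive uncertainty of the classifier."""
    return -sum(p * math.log(p + 1e-12) for p in probs)

def select_for_annotation(
    unlabeled: List[str],
    classifier_probs: Callable[[str], Sequence[float]],
    explanation_score: Callable[[str], float],
    k: int = 8,
    beta: float = 0.5,                     # assumed mixing weight
) -> List[str]:
    def acquisition(x: str) -> float:
        # High entropy and a low explanation score both push an example up the list.
        return beta * entropy(classifier_probs(x)) - (1 - beta) * explanation_score(x)
    return sorted(unlabeled, key=acquisition, reverse=True)[:k]

# Toy usage with fake model outputs standing in for the encoder and decoder.
pool = ["ex1", "ex2", "ex3"]
probs = {"ex1": [0.9, 0.1], "ex2": [0.5, 0.5], "ex3": [0.6, 0.4]}
expl = {"ex1": 0.8, "ex2": 0.2, "ex3": 0.9}
print(select_for_annotation(pool, probs.get, expl.get, k=1))  # ['ex2']
```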


Enhancing Argument Structure Extraction with Efficient Leverage of Contextual Information

Oct 08, 2023
Yun Luo, Zhen Yang, Fandong Meng, Yingjie Li, Jie Zhou, Yue Zhang

Argument structure extraction (ASE) aims to identify the discourse structure of arguments within documents. Previous research has demonstrated that contextual information is crucial for developing an effective ASE model. However, we observe that merely concatenating sentences in a contextual window does not fully utilize contextual information and can sometimes lead to excessive attention on less informative sentences. To tackle this challenge, we propose an Efficient Context-aware ASE model (ECASE) that fully exploits contextual information by enhancing modeling capacity and augmenting training data. Specifically, we introduce a sequence-attention module and distance-weighted similarity loss to aggregate contextual information and argumentative information. Additionally, we augment the training data by randomly masking discourse markers and sentences, which reduces the model's reliance on specific words or less informative sentences. Our experiments on five datasets from various domains demonstrate that our model achieves state-of-the-art performance. Furthermore, ablation studies confirm the effectiveness of each module in our model.
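The data-augmentation step described above (randomly masking discourse markers and sentences) is easy to sketch; the marker list, masking rates, and mask token below are illustrative assumptions, not the configuration used in the paper.

```python
# Sketch of ECASE-style augmentation: hide discourse markers and whole sentences
# so the model cannot over-rely on specific cue words or uninformative sentences.
import random
from typing import List

DISCOURSE_MARKERS = {"however", "therefore", "because", "thus", "moreover", "finally"}

def augment_context(sentences: List[str], p_marker: float = 0.3, p_sentence: float = 0.15,
                    mask_token: str = "[MASK]", seed: int = 0) -> List[str]:
    rng = random.Random(seed)
    augmented = []
    for sentence in sentences:
        if rng.random() < p_sentence:
            augmented.append(mask_token)   # mask the whole sentence
            continue
        words = [mask_token if w.lower().strip(",.;") in DISCOURSE_MARKERS
                 and rng.random() < p_marker else w
                 for w in sentence.split()]
        augmented.append(" ".join(words))
    return augmented

print(augment_context(["However, the policy failed.", "Therefore, it should be revised."]))
```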

* EMNLP 2023 

GPT Can Solve Mathematical Problems Without a Calculator

Sep 12, 2023
Zhen Yang, Ming Ding, Qingsong Lv, Zhihuan Jiang, Zehai He, Yuyi Guo, Jinfeng Bai, Jie Tang

Previous studies have typically assumed that large language models cannot accurately perform arithmetic operations, particularly multiplication of numbers with more than 8 digits and operations involving decimals and fractions, without the use of calculator tools. This paper aims to challenge this misconception. With sufficient training data, a 2 billion-parameter language model can perform multi-digit arithmetic operations with almost 100% accuracy and without data leakage, significantly surpassing GPT-4 (whose multi-digit multiplication accuracy is only 4.3%). We also demonstrate that our MathGLM, fine-tuned from GLM-10B on a dataset with additional multi-step arithmetic operations and math problems described in text, achieves performance similar to GPT-4 on a 5,000-sample Chinese math problem test set. Our code and data are publicly available at https://github.com/THUDM/MathGLM.
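The abstract attributes the result mainly to sufficient training data containing multi-step arithmetic. As a purely hypothetical sketch of how such text-form training examples might be synthesized (the actual MathGLM data construction is not described here), nested expressions can be generated and paired with their evaluated results:

```python
# Hypothetical generator of multi-step arithmetic training text; the operator set,
# operand ranges, and nesting depth are assumptions for illustration only.
import random

def make_example(rng: random.Random, depth: int = 3) -> str:
    ops = ["+", "-", "*"]
    expr = str(rng.randint(1, 999))
    for _ in range(depth):
        expr = f"({expr} {rng.choice(ops)} {rng.randint(1, 999)})"
    return f"{expr} = {eval(expr)}"   # eval is safe here: the string is generated above

rng = random.Random(42)
for _ in range(3):
    print(make_example(rng))          # prints nested expressions paired with their results
```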

* 26 pages, 14 figures 

Deep Reinforcement Learning-driven Cross-Community Energy Interaction Optimal Scheduling

Sep 02, 2023
Yang Li, Wenjie Ma, Fanjin Bu, Zhen Yang, Bin Wang, Meng Han

To coordinate energy interactions among communities and energy conversions among multi-energy subsystems in a multi-community integrated energy system under uncertainty, and to achieve overall optimal scheduling of the comprehensive energy system, this paper proposes a comprehensive scheduling model that uses a multi-agent deep reinforcement learning algorithm to learn the load characteristics of different communities and make decisions based on this knowledge. In this model, the scheduling problem of the integrated energy system is transformed into a Markov decision process and solved with a data-driven deep reinforcement learning algorithm, which avoids the need to model the complex energy coupling relationships among communities and multi-energy subsystems. The simulation results show that the proposed method effectively captures the load characteristics of different communities and exploits their complementarity to coordinate reasonable energy interactions among them. This reduces the wind curtailment rate from 16.3% to 0% and lowers the overall operating cost by 5445.6 Yuan, demonstrating significant economic and environmental benefits.
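As the abstract notes, the scheduling problem is cast as a Markov decision process. Below is a toy, heavily simplified environment sketch under assumed dynamics and prices (the paper's actual state, action, and reward definitions are richer); it only illustrates the state-action-reward structure a DRL agent would learn from.

```python
# Toy MDP sketch: state = per-community load and available wind, action = energy
# exchanged between communities, reward = negative cost minus a curtailment penalty.
import random

class CommunityEnergyEnv:
    def __init__(self, n_communities: int = 3, seed: int = 0):
        self.n = n_communities
        self.rng = random.Random(seed)
        self.state = self._sample()

    def _sample(self):
        # (electric load, available wind power) per community, in arbitrary units
        return [(self.rng.uniform(1, 5), self.rng.uniform(0, 3)) for _ in range(self.n)]

    def step(self, actions):
        # actions[i] > 0: community i imports that much energy from its neighbours
        cost, curtailed = 0.0, 0.0
        for (load, wind), a in zip(self.state, actions):
            residual = max(load - a, 0.0)         # demand left after the exchange
            used_wind = min(wind, residual)
            curtailed += wind - used_wind         # wind that could not be absorbed
            cost += (residual - used_wind) * 0.6  # assumed grid purchase price
        reward = -cost - 0.5 * curtailed          # penalize cost and wind curtailment
        self.state = self._sample()               # next dispatch interval
        return self.state, reward

env = CommunityEnergyEnv()
_, reward = env.step([0.5, 0.0, -0.5])
print(round(reward, 2))  # a multi-agent DRL algorithm would maximize this signal
```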

* in Chinese language, Accepted by Electric Power Construction 

ViLTA: Enhancing Vision-Language Pre-training through Textual Augmentation

Aug 31, 2023
Weihan Wang, Zhen Yang, Bin Xu, Juanzi Li, Yankui Sun

Vision-language pre-training (VLP) methods have blossomed recently, and their crucial goal is to jointly learn visual and textual features via a transformer-based architecture, demonstrating promising improvements on a variety of vision-language tasks. Prior work usually focuses on how to align visual and textual features, while strategies for improving the robustness of the model and speeding up model convergence remain insufficiently explored. In this paper, we propose a novel method, ViLTA, comprising two components to further facilitate the model's learning of fine-grained representations from image-text pairs. For Masked Language Modeling (MLM), we propose a cross-distillation method to generate soft labels that enhance the robustness of the model, alleviating the problem of treating synonyms of masked words as negative samples under one-hot labels. For Image-Text Matching (ITM), we leverage the current language encoder to synthesize hard negatives based on the context of the language input, encouraging the model to learn high-quality representations by increasing the difficulty of the ITM task. By leveraging the above techniques, our ViLTA achieves better performance on various vision-language tasks. Extensive experiments on benchmark datasets demonstrate the effectiveness of ViLTA and its promising potential for vision-language pre-training.
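For the MLM component described above, the cross-distillation idea amounts to replacing one-hot targets for masked tokens with a soft distribution from another model, so synonyms of the masked word are not pushed down as hard negatives. The sketch below shows one common way to write such a soft-label loss; the temperature and the teacher/student framing are assumptions, not ViLTA's exact objective.

```python
# Soft-label MLM loss sketch: KL divergence between the model's masked-token
# prediction and a soft target distribution produced by another language model.
import torch
import torch.nn.functional as F

def soft_label_mlm_loss(student_logits: torch.Tensor,
                        teacher_logits: torch.Tensor,
                        temperature: float = 2.0) -> torch.Tensor:
    """Both tensors have shape (num_masked_tokens, vocab_size)."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2

# Toy usage with random logits standing in for the two models' outputs.
student = torch.randn(4, 30522)
teacher = torch.randn(4, 30522)
print(soft_label_mlm_loss(student, teacher).item())
```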

* 15 pages, 5 figures 