Zhiyuan Zhao

ART·V: Auto-Regressive Text-to-Video Generation with Diffusion Models

Nov 30, 2023
Wenming Weng, Ruoyu Feng, Yanhui Wang, Qi Dai, Chunyu Wang, Dacheng Yin, Zhiyuan Zhao, Kai Qiu, Jianmin Bao, Yuhui Yuan, Chong Luo, Yueyi Zhang, Zhiwei Xiong

We present ART·V, an efficient framework for auto-regressive video generation with diffusion models. Unlike existing methods that generate an entire video in one shot, ART·V generates a single frame at a time, conditioned on the previous ones. The framework offers three distinct advantages. First, it only learns simple continual motions between adjacent frames, avoiding the complex long-range motions that require huge training data. Second, it preserves the high-fidelity generation ability of pre-trained image diffusion models by making only minimal network modifications. Third, it can generate arbitrarily long videos conditioned on a variety of prompts such as text, image, or their combinations, making it highly versatile and flexible. To combat the drifting issue common in AR models, we propose a masked diffusion model which implicitly learns which information can be drawn from reference images rather than network predictions, reducing the risk of generating inconsistent appearances that cause drifting. Moreover, we further enhance generation coherence by conditioning on the initial frame, which typically contains minimal noise. This is particularly useful for long video generation. When trained for only two weeks on four GPUs, ART·V can already generate videos with natural motions, rich details, and a high level of aesthetic quality. In addition, it enables various appealing applications, e.g., composing a long video from multiple text prompts.
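
To make the frame-by-frame scheme concrete, here is a minimal sketch of ART·V-style auto-regressive sampling. The `denoiser` and `scheduler` interfaces are assumptions for illustration, not the released code; the key point is that every new frame is denoised conditioned on both the previous frame and the low-noise anchor frame.

```python
# Hypothetical interfaces for illustration; not the authors' released code.
import torch

def generate_video(denoiser, scheduler, text_emb, first_frame,
                   num_frames=16, steps=50):
    """Generate one frame at a time, each conditioned on the previous
    frame and on the initial (anchor) frame to limit drifting."""
    frames = [first_frame]
    for _ in range(num_frames - 1):
        x = torch.randn_like(first_frame)      # start each frame from pure noise
        for t in scheduler.timesteps(steps):
            # The masked diffusion model decides which regions to draw from
            # the reference frames and which to predict anew.
            eps = denoiser(x, t, text=text_emb,
                           prev_frame=frames[-1], anchor=frames[0])
            x = scheduler.step(eps, t, x)      # one reverse-diffusion step
        frames.append(x)
    return torch.stack(frames, dim=0)          # (T, C, H, W)
```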

* 24 pages, 21 figures. Project page at https://warranweng.github.io/art.v 

Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization

Nov 28, 2023
Zhiyuan Zhao, Bin Wang, Linke Ouyang, Xiaoyi Dong, Jiaqi Wang, Conghui He

Multimodal large language models have made significant advancements in recent years, yet they still suffer from a common issue known as the "hallucination problem," in which models generate textual descriptions containing content that is inaccurate or absent from the image. To address this issue, this paper introduces a novel strategy: Hallucination-Aware Direct Preference Optimization (HA-DPO). Our approach treats the hallucination problem as a unique preference-selection issue, where the model is trained to favor the non-hallucinating response when presented with two responses to the same image (one accurate and one hallucinating). This paper also presents an efficient process for constructing hallucination sample pairs to ensure high-quality, style-consistent pairs for stable HA-DPO training. We applied this strategy to two mainstream multimodal models, and the results showed a significant reduction in the hallucination problem and an enhancement in the models' generalization capabilities. With HA-DPO, the MiniGPT-4 model demonstrates significant advancements: POPE accuracy increases from 51.13% to 85.66% (a 34.5% absolute improvement), and the MME score rises from 968.58 to 1365.76 (a 41% relative improvement). The code, models, and datasets will be made publicly available.
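
The preference objective itself is the standard DPO loss; a minimal sketch of how HA-DPO would apply it is below, with the non-hallucinating answer as the preferred response. Tensor names are illustrative; the sequence log-probabilities would come from the policy being tuned and a frozen reference model.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_good, logp_bad, ref_logp_good, ref_logp_bad, beta=0.1):
    """DPO objective: prefer the accurate response over the hallucinating
    one, measured relative to a frozen reference model."""
    good_margin = logp_good - ref_logp_good  # policy's gain on the good answer
    bad_margin = logp_bad - ref_logp_bad     # policy's gain on the bad answer
    return -F.logsigmoid(beta * (good_margin - bad_margin)).mean()

# Dummy sequence log-probabilities for a batch of 4 preference pairs:
lp_w, lp_l, rp_w, rp_l = (torch.randn(4) for _ in range(4))
print(dpo_loss(lp_w, lp_l, rp_w, rp_l))
```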

* Preprint 

Performative Time-Series Forecasting

Oct 09, 2023
Zhiyuan Zhao, Alexander Rodriguez, B. Aditya Prakash

Time-series forecasting is a critical challenge in various domains and has witnessed substantial progress in recent years. Many real-life scenarios, such as public health, economics, and social applications, involve feedback loops where predictions can influence the predicted outcome, subsequently altering the target variable's distribution. This phenomenon, known as performativity, introduces the potential for 'self-negating' or 'self-fulfilling' predictions. Despite extensive studies in classification problems across domains, performativity remains largely unexplored in the context of time-series forecasting from a machine-learning perspective. In this paper, we formalize performative time-series forecasting (PeTS), addressing the challenge of accurate predictions when performativity-induced distribution shifts are possible. We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts and subsequently predicts targets accordingly. We provide theoretical insights suggesting that FPS can potentially lead to reduced generalization error. We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks. The results demonstrate that FPS consistently outperforms conventional time-series forecasting methods, highlighting its efficacy in handling performativity-induced challenges.
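
A hedged sketch of the delayed-response idea: anticipate how the input features themselves will shift in response to the forecast, then predict the target from the shifted features. The two-stage decomposition and module choices here are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class FPSForecaster(nn.Module):
    def __init__(self, n_features, hidden=32, horizon=1):
        super().__init__()
        # Stage 1: map observed features to anticipated post-response values.
        self.shift = nn.GRU(n_features, hidden, batch_first=True)
        self.shift_head = nn.Linear(hidden, n_features)
        # Stage 2: forecast the target from the shifted features.
        self.forecast = nn.GRU(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, horizon)

    def forward(self, x):                    # x: (batch, time, n_features)
        h, _ = self.shift(x)
        x_shifted = self.shift_head(h)       # anticipated feature distribution shift
        h2, _ = self.forecast(x_shifted)
        return self.out(h2[:, -1])           # forecast from the last step

model = FPSForecaster(n_features=5)
print(model(torch.randn(8, 20, 5)).shape)    # torch.Size([8, 1])
```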

* 12 pages (7 main text, 2 reference, 3 appendix), 3 figures, 4 tables 

InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition

Sep 29, 2023
Pan Zhang, Xiaoyi Dong, Bin Wang, Yuhang Cao, Chao Xu, Linke Ouyang, Zhiyuan Zhao, Shuangrui Ding, Songyang Zhang, Haodong Duan, Hang Yan, Xinyue Zhang, Wei Li, Jingwen Li, Kai Chen, Conghui He, Xingcheng Zhang, Yu Qiao, Dahua Lin, Jiaqi Wang

We propose InternLM-XComposer, a vision-language large model that enables advanced image-text comprehension and composition. The innovative nature of our model is highlighted by three appealing properties: 1) Interleaved Text-Image Composition: InternLM-XComposer can effortlessly generate coherent and contextual articles that seamlessly integrate images, providing a more engaging and immersive reading experience. Simply provide a title, and our system will generate the corresponding manuscript. It can intelligently identify the areas in the text where images would enhance the content and automatically insert the most appropriate visual candidates. 2) Comprehension with Rich Multilingual Knowledge: The text-image comprehension is empowered by training on extensive multi-modal multilingual concepts with carefully crafted strategies, resulting in a deep understanding of visual content. 3) State-of-the-art Performance: Our model consistently achieves state-of-the-art results across various mainstream benchmarks for vision-language foundational models, including MME Benchmark, MMBench, MMBench-CN, Seed-Bench, and CCBench (Chinese Cultural Benchmark). Collectively, InternLM-XComposer seamlessly blends advanced text-image comprehension and composition, revolutionizing vision-language interaction and offering new insights and opportunities. The InternLM-XComposer model series with 7B parameters is publicly available at https://github.com/InternLM/InternLM-XComposer.
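
A schematic of the interleaved composition pipeline as the abstract describes it: draft the article from a title, spot where an image would help, and pick the best candidate. All functions and attributes below are placeholders, not the released API.

```python
def compose_article(title, write_text, find_image_slots, rank_candidates):
    article = write_text(title)                  # draft the manuscript from the title
    for slot in find_image_slots(article):       # spans where a visual would help
        best = rank_candidates(slot.context)[0]  # most appropriate visual candidate
        article.insert_image(slot.position, best)
    return article
```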

* Code is available at https://github.com/InternLM/InternLM-XComposer 

MLLM-DataEngine: An Iterative Refinement Approach for MLLM

Sep 11, 2023
Zhiyuan Zhao, Linke Ouyang, Bin Wang, Siyuan Huang, Pan Zhang, Xiaoyi Dong, Jiaqi Wang, Conghui He

Despite the great advances of Multimodal Large Language Models (MLLMs) in both instruction-dataset building and benchmarking, the independence of training and evaluation makes it hard for current MLLMs to further improve under the guidance of evaluation results at relatively low human cost. In this paper, we propose MLLM-DataEngine, a novel closed-loop system that bridges data generation, model training, and evaluation. Within each loop iteration, MLLM-DataEngine first analyzes the weaknesses of the model based on the evaluation results, then generates a proper incremental dataset for the next training iteration, enhancing the model's capability iteratively. Compared with previous data-collection methods, which are separate from benchmarking, the data generated by MLLM-DataEngine shows better targeting, quality, and correctness. For targeting, we propose an Adaptive Bad-case Sampling module, which adjusts the ratio of different types of data within each incremental dataset based on the benchmarking results. For quality, we resort to GPT-4 to generate high-quality data for each given data type. For correctness, prompt design is critical for the data generation results. Rather than relying on hand-crafted prompts, we propose an Interactive Prompt Optimization strategy, which optimizes the prompt through multi-round interaction between humans and GPT, greatly improving the correctness of the generated data. Through extensive experiments, we find that MLLM-DataEngine can boost MLLM capability in a targeted and automatic manner with minimal human participation. We hope it can serve as a general solution for building future MLLMs. MLLM-DataEngine has been open-sourced and is now available at https://github.com/opendatalab/MLLM-DataEngine.
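
One loop iteration can be pictured as below. Function names are placeholders for the steps the paper describes, and Adaptive Bad-case Sampling is reduced to the simplest plausible rule: allocate more of the generation budget to the data types with higher benchmark error.

```python
def data_engine_iteration(model, benchmark, generate_with_gpt4, train, budget=1000):
    results = benchmark.evaluate(model)               # per-data-type accuracy
    error = {t: 1.0 - acc for t, acc in results.items()}
    total = sum(error.values()) or 1.0
    # Adaptive Bad-case Sampling (simplified): weight quotas by error rate.
    quota = {t: round(budget * e / total) for t, e in error.items()}
    incremental = [ex for t, n in quota.items()
                   for ex in generate_with_gpt4(data_type=t, n=n)]
    return train(model, incremental)                  # model for the next iteration
```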

* Code and models are available at https://github.com/opendatalab/MLLM-DataEngine 

Filler Word Detection with Hard Category Mining and Inter-Category Focal Loss

Apr 12, 2023
Zhiyuan Zhao, Lijun Wu, Chuanxin Tang, Dacheng Yin, Yucheng Zhao, Chong Luo

Filler words like "um" or "uh" are common in spontaneous speech. It is desirable to automatically detect and remove them from recordings, as they affect the fluency, confidence, and professionalism of speech. Previous studies and our preliminary experiments reveal that the biggest challenge in filler word detection is that fillers can easily be confused with other hard categories like "a" or "I". In this paper, we propose a novel filler word detection method that effectively addresses this challenge by dynamically adding auxiliary categories and applying an additional inter-category focal loss. The auxiliary categories force the model to explicitly model the confusing words by mining hard categories. In addition, the inter-category focal loss adaptively adjusts the penalty weight between the "filler" and "non-filler" categories to deal with other confusing words left in the "non-filler" category. Our system achieves the best results on the PodcastFillers dataset, with a large improvement over other methods.
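
One plausible reading of the inter-category focal term is sketched below; the paper's exact formulation may differ. The penalty on each frame grows with the probability mass the model assigns to the competing group (filler vs. non-filler), so examples near the group boundary are weighted up.

```python
import torch
import torch.nn.functional as F

def inter_category_focal_loss(logits, targets, filler_ids, gamma=2.0):
    """logits: (N, C); targets: (N,); filler_ids: class indices treated as fillers."""
    probs = F.softmax(logits, dim=-1)
    filler_mask = torch.zeros(logits.size(-1), dtype=torch.bool)
    filler_mask[filler_ids] = True
    p_filler = probs[:, filler_mask].sum(-1)   # mass on the filler group
    is_filler = filler_mask[targets]           # true group of each example
    # Probability assigned to the wrong group drives the focal weight.
    p_wrong = torch.where(is_filler, 1.0 - p_filler, p_filler)
    ce = F.cross_entropy(logits, targets, reduction="none")
    return ((p_wrong ** gamma) * ce).mean()

logits, targets = torch.randn(16, 6), torch.randint(0, 6, (16,))
print(inter_category_focal_loss(logits, targets, filler_ids=[0, 1]))
```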

* Accepted by ICASSP 2023 

TridentSE: Guiding Speech Enhancement with 32 Global Tokens

Oct 24, 2022
Dacheng Yin, Zhiyuan Zhao, Chuanxin Tang, Zhiwei Xiong, Chong Luo

In this paper, we present TridentSE, a novel architecture for speech enhancement that efficiently captures both global information and local details. TridentSE maintains a T-F-bin-level representation to capture details and uses a small number of global tokens to process global information. Information is propagated between the local and global representations through cross-attention modules. To capture both inter- and intra-frame information, the global tokens are divided into two groups that process information along the time and frequency axes, respectively. A metric discriminator is further employed to guide our model toward higher perceptual quality. Even with significantly lower computational cost, TridentSE outperforms a variety of previous speech enhancement methods, achieving a PESQ of 3.47 on the VoiceBank+DEMAND dataset and a PESQ of 3.44 on the DNS no-reverb test set. Visualization shows that the global tokens learn diverse and interpretable global patterns.
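
A minimal sketch of the global-token mechanism: a small set of learned tokens attends over the T-F feature map to gather global context, then the map attends back to the updated tokens. For brevity this collapses the paper's two token groups (time and frequency) into one, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class GlobalTokenBlock(nn.Module):
    def __init__(self, dim=64, n_tokens=32, heads=4):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(n_tokens, dim))
        self.gather = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spread = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                 # x: (batch, T*F, dim) flattened T-F bins
        g = self.tokens.expand(x.size(0), -1, -1)
        g, _ = self.gather(g, x, x)       # tokens collect global information
        out, _ = self.spread(x, g, g)     # bins read the global summary back
        return x + out                    # residual connection keeps local detail

block = GlobalTokenBlock()
print(block(torch.randn(2, 1000, 64)).shape)  # torch.Size([2, 1000, 64])
```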

* 5 pages, 2 figures, 3 tables 

Exploring Effective Knowledge Transfer for Few-shot Object Detection

Oct 05, 2022
Zhiyuan Zhao, Qingjie Liu, Yunhong Wang

Recently, few-shot object detection (FSOD) has received much attention from the community, and many methods have been proposed to address this problem from a knowledge-transfer perspective. Though promising results have been achieved, these methods fail to achieve shot-stable performance: methods that excel in low-shot regimes are likely to struggle in high-shot regimes, and vice versa. We believe this is because the primary challenge of FSOD changes as the number of shots varies. In the low-shot regime, the primary challenge is the lack of inner-class variation. In the high-shot regime, as the variance approaches the real one, the main hindrance to performance comes from misalignment between learned and true distributions. However, these two distinct issues remain unsolved in most existing FSOD methods. In this paper, we propose to overcome these challenges by exploiting the rich knowledge the model has learned and effectively transferring it to the novel classes. For the low-shot regime, we propose a distribution calibration method to deal with the lack of inner-class variation. Meanwhile, a shift compensation method is proposed to compensate for possible distribution shift during fine-tuning. For the high-shot regime, we propose to use the knowledge learned from ImageNet as guidance for feature learning in the fine-tuning stage, which implicitly aligns the distributions of the novel classes. Although targeted toward different regimes, these two strategies can work together to further improve FSOD performance. Experiments on both the VOC and COCO benchmarks show that our proposed method significantly outperforms the baseline and produces competitive results in both low-shot settings (shot < 5) and high-shot settings (shot >= 5). Code is available at https://github.com/JulioZhao97/EffTrans_Fsdet.git.
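
For the low-shot side, a hedged sketch of distribution calibration is shown below (the paper's exact transfer rule is not reproduced): borrow covariance from the most similar base classes to enrich a novel class's few examples, then sample calibrated virtual features for fine-tuning.

```python
import numpy as np

def calibrate_and_sample(novel_feats, base_means, base_covs, k=2, n_samples=10):
    """novel_feats: (n_shot, d); base_means: (n_base, d); base_covs: (n_base, d, d)."""
    mu_novel = novel_feats.mean(axis=0)
    # Pick the k base classes nearest to the novel class in feature space.
    nearest = np.argsort(np.linalg.norm(base_means - mu_novel, axis=1))[:k]
    cov = base_covs[nearest].mean(axis=0)   # calibrated covariance from base classes
    return np.random.multivariate_normal(mu_novel, cov, size=n_samples)

d, n_base = 8, 5
virtual = calibrate_and_sample(np.random.randn(3, d),
                               np.random.randn(n_base, d),
                               np.stack([np.eye(d)] * n_base))
print(virtual.shape)  # (10, 8)
```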

* 9 pages, 6 figures, accepted by ACM Multimedia 2022 

PetLock: A Genderless and Standard Interface for the Future On-orbit Construction

Sep 09, 2022
Yuntao Li, Zichun Xu, Xiaohang Yang, Zhiyuan Zhao, Jingdong Zhao, Hong Liu

Modular design is the foundation of future on-orbit construction technology for large space facilities. A standard interface is the key technology in the modular design of future space robotic systems and space facilities. This paper presents the design and testing of PetLock, a standard and genderless interface which can transfer mechanical loads, power, and data between a future modular space robotic manipulator and spacecraft. PetLock adopts a completely genderless design, including the connection face, locking mechanism, and data and power interfaces. The connection surface provides large translational and rotational misalignment tolerance, owing to its 120-degree-symmetric 3D shape design. The locking mechanism features a three-locking-pin retraction design, which is simple and reliable. POGO-pin connectors in the center of the interface provide the power and data transfer capabilities. Due to its high locking force, large tolerance, high reliability, and low cost, PetLock has great application potential in future on-orbit construction missions.

* 8 pages, 11 figures 