Hongyan Li

TEST: Text Prototype Aligned Embedding to Activate LLM's Ability for Time Series

Aug 16, 2023
Chenxi Sun, Yaliang Li, Hongyan Li, Shenda Hong

This work summarizes two strategies for completing time-series (TS) tasks with today's large language models (LLMs): LLM-for-TS, which designs and trains a fundamental large model for TS data, and TS-for-LLM, which enables a pre-trained LLM to handle TS data. Considering insufficient data accumulation, limited resources, and semantic context requirements, this work focuses on TS-for-LLM methods, where we aim to activate the LLM's ability for TS data by designing a TS embedding method suitable for the LLM. The proposed method is named TEST. It first tokenizes TS, builds an encoder to embed them via instance-wise, feature-wise, and text-prototype-aligned contrast, then creates prompts to make the LLM more receptive to the embeddings, and finally implements TS tasks. Experiments are carried out on TS classification and forecasting tasks using 8 LLMs with different structures and sizes. Although the results do not significantly outperform current SOTA models customized for TS tasks, treating the LLM as a pattern machine endows it with the ability to process TS data without compromising its language ability. This paper is intended to serve as a foundational work that will inspire further research.
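
As a rough illustration of the text-prototype-aligned contrast, the sketch below pulls each TS token embedding toward the LLM's text-prototype embeddings. The function name, the entropy-style objective, and the temperature are illustrative assumptions, since the abstract does not specify the loss form.

```python
import torch
import torch.nn.functional as F

def prototype_alignment_loss(ts_emb, text_protos, temperature=0.1):
    """Contrastive-style loss pulling time-series token embeddings toward
    text-prototype embeddings (hypothetical sketch, not the authors' code).

    ts_emb:      (batch, dim) embeddings of TS tokens from the TS encoder
    text_protos: (num_protos, dim) frozen LLM word-embedding prototypes
    """
    ts_emb = F.normalize(ts_emb, dim=-1)
    text_protos = F.normalize(text_protos, dim=-1)
    # Similarity of each TS embedding to every text prototype.
    logits = ts_emb @ text_protos.t() / temperature   # (batch, num_protos)
    # Entropy minimization as one plausible alignment objective: each TS
    # embedding is encouraged to commit to a few nearby prototypes.
    probs = logits.softmax(dim=-1)
    return -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1).mean()
```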

* 10 pages, 6 figures 

A model-data asymptotic-preserving neural network method based on micro-macro decomposition for gray radiative transfer equations

Dec 11, 2022
Hongyan Li, Song Jiang, Wenjun Sun, Liwei Xu, Guanyu Zhou

We propose a model-data asymptotic-preserving neural network (MD-APNN) method to solve the nonlinear gray radiative transfer equations (GRTEs). The system is challenging to simulate with both traditional numerical schemes and vanilla physics-informed neural networks (PINNs) due to its multiscale characteristics. Under the framework of PINNs, we employ a micro-macro decomposition technique to construct a new asymptotic-preserving (AP) loss function, which includes the residual of the governing equations in micro-macro coupled form, the initial and boundary conditions with additional diffusion-limit information, the conservation laws, and a few labeled data. A convergence analysis is performed for the proposed method, and a number of numerical examples are presented to illustrate the efficiency of MD-APNNs and, in particular, the importance of the AP property in neural networks for diffusion-dominated problems. The numerical results indicate that MD-APNNs outperform APNNs and purely data-driven networks in the simulation of nonlinear non-stationary GRTEs.
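
The AP loss combines four ingredients: the micro-macro residual, initial/boundary conditions with diffusion-limit information, conservation laws, and a few labeled data. A structural sketch of how such a composite loss might be assembled is shown below; the callables and weights are placeholders, not the paper's formulation.

```python
import torch

def ap_loss(residual_fn, bc_fn, cons_fn, data_fn, weights=(1.0, 1.0, 1.0, 1.0)):
    """Assemble a model-data AP-style loss from its four ingredients
    (a structural sketch only; each *_fn returns a residual tensor,
    e.g. the micro-macro PDE residual evaluated at collocation points)."""
    w_r, w_b, w_c, w_d = weights
    terms = [residual_fn(), bc_fn(), cons_fn(), data_fn()]
    losses = [(t ** 2).mean() for t in terms]
    return w_r * losses[0] + w_b * losses[1] + w_c * losses[2] + w_d * losses[3]

# Example with dummy residuals standing in for the real PDE terms:
loss = ap_loss(lambda: torch.randn(128), lambda: torch.randn(64),
               lambda: torch.randn(32), lambda: torch.randn(16))
```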

Continuous Diagnosis and Prognosis by Controlling the Update Process of Deep Neural Networks

Oct 06, 2022
Chenxi Sun, Hongyan Li, Moxian Song, Derun Cai, Baofeng Zhang, Shenda Hong

Continuous diagnosis and prognosis are essential for intensive care patients. They provide more opportunities for timely treatment and rational resource allocation, especially for sepsis, a leading cause of death in the ICU, and COVID-19, a new worldwide epidemic. Although deep learning methods have shown great superiority in many medical tasks, they tend to catastrophically forget, overfit, and deliver results too late when performing diagnosis and prognosis in the continuous mode. In this work, we summarize the three requirements of this task, propose a new concept, continuous classification of time series (CCTS), and design a novel model training method, the restricted update strategy of neural networks (RU). In the context of continuous prognosis, our method outperformed all baselines and achieved average accuracies of 90%, 97%, and 85% on sepsis prognosis, COVID-19 mortality prediction, and eight-disease classification, respectively. Moreover, our method endows deep learning with interpretability, having the potential to explore disease mechanisms and provide a new horizon for medical research. We achieved disease staging for sepsis and COVID-19, discovering four and three stages, respectively, with their typical biomarkers. Furthermore, our method is a data-agnostic and model-agnostic plug-in; it can be used to continuously prognose other diseases with staging and even to implement CCTS in other fields.
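
The abstract does not detail the restricted update strategy, but one plausible reading, restricting gradient updates to parameters that matter little for earlier distributions, can be sketched as follows. The importance scores, threshold, and masking rule are all assumptions for illustration.

```python
import torch

def restricted_update(model, importance, lr=1e-3, threshold=0.5):
    """One gradient step that restricts updates to parameters whose
    importance for earlier stages is low (an illustrative reading of a
    restricted-update idea, not the paper's algorithm).

    importance: list of tensors mirroring model.parameters(), holding a
    per-weight importance score for previously learned distributions.
    Call after loss.backward().
    """
    with torch.no_grad():
        for p, imp in zip(model.parameters(), importance):
            if p.grad is None:
                continue
            mask = (imp < threshold).float()  # update only "free" weights
            p -= lr * mask * p.grad
```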

* 41 pages, 15 figures 

Confidence-Guided Learning Process for Continuous Classification of Time Series

Aug 14, 2022
Chenxi Sun, Moxian Song, Derun Cai, Baofeng Zhang, Shenda Hong, Hongyan Li

In the real world, the class of a time series is usually labeled at the final time, but many applications require classifying time series at every time point; for example, the outcome of a critical patient is only determined at the end, but the patient should be diagnosed at all times for timely treatment. Thus, we propose a new concept: Continuous Classification of Time Series (CCTS). It requires the model to learn data at different time stages. But a time series evolves dynamically, leading to different data distributions, and when a model learns multiple distributions, it tends to forget or overfit. We suggest that a meaningful learning schedule is possible, based on an interesting observation: measured by confidence, the process of a model learning multiple distributions is similar to the process of a human learning multiple pieces of knowledge. Thus, we propose a novel Confidence-guided method for CCTS (C3TS). It imitates the alternating human confidence described by the Dunning-Kruger effect. We define objective confidence to arrange the data and self-confidence to control the learning duration. Experiments on four real-world datasets show that C3TS is more accurate than all baselines for CCTS.
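
A minimal sketch of how objective confidence might be computed and used to arrange data is given below; the true-class-probability reading of confidence and the easy-first ordering are assumptions, as the abstract does not pin down either.

```python
import torch

@torch.no_grad()
def objective_confidence(model, x, y):
    """Model confidence on the true class, used to arrange training data
    (one plausible reading of 'objective confidence'; not the authors' code).
    model(x) is assumed to return class logits; y holds integer labels."""
    probs = model(x).softmax(dim=-1)
    return probs.gather(1, y.unsqueeze(1)).squeeze(1)   # (batch,)

def arrange_by_confidence(model, x, y, descending=True):
    """Curriculum ordering: easy (high-confidence) samples first."""
    conf = objective_confidence(model, x, y)
    order = conf.argsort(descending=descending)
    return x[order], y[order]
```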

* 20 pages, 12 figures 

Optical Flow for Video Super-Resolution: A Survey

Mar 20, 2022
Zhigang Tu, Hongyan Li, Wei Xie, Yuanzhong Liu, Shifu Zhang, Baoxin Li, Junsong Yuan

Video super-resolution is currently one of the most active research topics in computer vision, as it plays an important role in many visual applications. Generally, video super-resolution contains a significant component, i.e., motion compensation, which estimates the displacement between successive video frames for temporal alignment. Optical flow, which can supply dense, sub-pixel motion between consecutive frames, is among the most common approaches to this task. To obtain a good understanding of the role optical flow plays in video super-resolution, we conduct, in this work, the first comprehensive review of the subject. This investigation covers the following major topics: the function of super-resolution (i.e., why we require super-resolution); the concept of video super-resolution (i.e., what video super-resolution is); the description of evaluation metrics (i.e., how (video) super-resolution performance is measured); the introduction of optical-flow-based video super-resolution; and the investigation of using optical flow to capture temporal dependency for video super-resolution. In particular, we give an in-depth study of deep learning based video super-resolution methods, where some representative algorithms are analyzed and compared. Additionally, we highlight some promising research directions and open issues that should be further addressed.
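
The motion-compensation step mentioned above, backward-warping a neighboring frame to the reference frame with a dense flow field, is standard across flow-based video super-resolution methods and can be sketched as:

```python
import torch
import torch.nn.functional as F

def warp_with_flow(frame, flow):
    """Backward-warp `frame` (N, C, H, W) to the reference view using
    optical flow (N, 2, H, W) in pixel units: the temporal-alignment
    step common to flow-based video super-resolution."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame)   # (2, H, W)
    coords = grid.unsqueeze(0) + flow                       # (N, 2, H, W)
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid_norm = torch.stack((coords_x, coords_y), dim=-1)   # (N, H, W, 2)
    return F.grid_sample(frame, grid_norm, align_corners=True)
```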

Slow-Fast Visual Tempo Learning for Video-based Action Recognition

Feb 24, 2022
Yuanzhong Liu, Zhigang Tu, Hongyan Li, Chi Chen, Baoxin Li, Junsong Yuan

Action visual tempo characterizes the dynamics and the temporal scale of an action, which helps distinguish human actions that share high similarities in visual dynamics and appearance. Previous methods capture visual tempo either by sampling raw videos at multiple rates, which requires a costly multi-layer network to handle each rate, or by hierarchically sampling backbone features, which relies heavily on high-level features that miss fine-grained temporal dynamics. In this work, we propose a Temporal Correlation Module (TCM), which can be easily embedded into current action recognition backbones in a plug-and-play manner, to extract action visual tempo from low-level, single-layer backbone features. Specifically, our TCM contains two main components: a Multi-scale Temporal Dynamics Module (MTDM) and a Temporal Attention Module (TAM). MTDM applies a correlation operation to learn pixel-wise, fine-grained temporal dynamics for both fast tempo and slow tempo. TAM adaptively emphasizes expressive features and suppresses inessential ones by analyzing global information across the various tempos. Extensive experiments on several action recognition benchmarks, e.g., Something-Something V1 & V2, Kinetics-400, UCF-101, and HMDB-51, demonstrate that the proposed TCM effectively improves the performance of existing video-based action recognition models by a large margin. The source code is publicly released at https://github.com/zphyix/TCM.
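
A minimal stand-in for MTDM's correlation operation, computing pixel-wise similarity between adjacent-frame features over a small displacement window, might look like the following; the displacement range and normalization are illustrative choices, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def temporal_correlation(feat_t, feat_tp1, max_disp=3):
    """Pixel-wise correlation between features of adjacent frames
    (a minimal stand-in for MTDM's correlation operation).
    feat_t, feat_tp1: (N, C, H, W); returns (N, (2*max_disp+1)**2, H, W)."""
    n, c, h, w = feat_t.shape
    pad = F.pad(feat_tp1, [max_disp] * 4)   # pad H and W by max_disp
    corrs = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = pad[:, :, dy:dy + h, dx:dx + w]
            # Channel-averaged dot product at each spatial position.
            corrs.append((feat_t * shifted).sum(dim=1, keepdim=True) / c)
    return torch.cat(corrs, dim=1)
```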

Joint-bone Fusion Graph Convolutional Network for Semi-supervised Skeleton Action Recognition

Feb 08, 2022
Zhigang Tu, Jiaxu Zhang, Hongyan Li, Yujin Chen, Junsong Yuan

In recent years, graph convolutional networks (GCNs) have played an increasingly critical role in skeleton-based human action recognition. However, most GCN-based methods still have two main limitations: 1) they only consider the motion information of the joints, or process the joints and bones separately, and thus are unable to fully explore the latent functional correlation between joints and bones for action recognition; 2) most of these works are trained in a supervised manner, which relies heavily on massive labeled training data. To address these issues, we propose a semi-supervised skeleton-based action recognition method, a setting that has rarely been exploited before. We design a novel correlation-driven joint-bone fusion graph convolutional network (CD-JBF-GCN) as an encoder and use a pose prediction head as a decoder to achieve semi-supervised learning. Specifically, the CD-JBF-GCN can explore the motion transmission between the joint stream and the bone stream, thereby promoting both streams to learn more discriminative feature representations. The pose-prediction-based auto-encoder in the self-supervised training stage allows the network to learn motion representations from unlabeled data, which is essential for action recognition. Extensive experiments on two popular datasets, i.e., NTU-RGB+D and Kinetics-Skeleton, demonstrate that our model achieves state-of-the-art performance for semi-supervised skeleton-based action recognition and is also useful for fully supervised methods.
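
The bone stream referenced above is conventionally derived from the joint stream as parent-to-joint difference vectors; a sketch under an assumed toy kinematic tree (real datasets such as NTU-RGB+D define their own):

```python
import torch

# Hypothetical parent indices for a 5-joint toy skeleton; index 0 is the
# root, whose bone vector comes out as zero.
PARENTS = [0, 0, 1, 2, 3]

def joints_to_bones(joints, parents=PARENTS):
    """Derive the bone stream from the joint stream: each bone is the
    vector from a joint's parent to the joint itself.
    joints: (N, T, V, 3) -> bones: (N, T, V, 3)."""
    parent_idx = torch.tensor(parents, dtype=torch.long)
    return joints - joints[:, :, parent_idx, :]
```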

GRP-FED: Addressing Client Imbalance in Federated Learning via Global-Regularized Personalization

Aug 31, 2021
Yen-Hsiu Chou, Shenda Hong, Chenxi Sun, Derun Cai, Moxian Song, Hongyan Li

Since real-world data is often long-tailed, it is challenging for Federated Learning (FL) to train across decentralized clients in practical applications. We present Global-Regularized Personalization (GRP-FED) to tackle the data imbalance issue by considering a single global model and multiple local models, one for each client. With adaptive aggregation, the global model treats multiple clients fairly and mitigates the global long-tailed issue. Each local model is learned from the local data and aligns with its distribution for customization. To prevent the local model from merely overfitting, GRP-FED applies an adversarial discriminator to regularize the learned global and local features. Extensive results show that GRP-FED improves under both global and local scenarios on the real-world MIT-BIH and synthetic CIFAR-10 datasets, achieving comparable performance and addressing client imbalance.
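
The aggregation step can be sketched as weighted state-dict averaging; since the abstract does not specify how GRP-FED's weights adapt, the sketch below simply takes per-client weights as given (FedAvg-style).

```python
import torch

def adaptive_aggregate(client_states, client_weights):
    """Weighted averaging of client model states into the global model
    (a FedAvg-style sketch; GRP-FED's actual adaptive weighting is not
    specified in the abstract, so the weights are taken as inputs).

    client_states:  list of state_dicts with identical keys
    client_weights: list of non-negative floats, one per client
    """
    w = torch.tensor(client_weights, dtype=torch.float32)
    w = w / w.sum()                         # normalize to a convex combination
    keys = client_states[0].keys()
    return {k: sum(wi * s[k] for wi, s in zip(w, client_states))
            for k in keys}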

* (FL-ICML'21) International Workshop on Federated Learning for User Privacy and Data Confidentiality in Conjunction with ICML 2021 

TE-ESN: Time Encoding Echo State Network for Prediction Based on Irregularly Sampled Time Series Data

May 02, 2021
Chenxi Sun, Shenda Hong, Moxian Song, Yanxiu Zhou, Yongyue Sun, Derun Cai, Hongyan Li

Prediction based on Irregularly Sampled Time Series (ISTS) is of wide concern in real-world applications. For more accurate prediction, methods should capture more data characteristics. Unlike ordinary time series, ISTS is characterized by irregular intra-series time intervals and different inter-series sampling rates. However, existing methods yield suboptimal predictions because, when modeling these two characteristics, they artificially introduce new dependencies within a time series and learn relations among time series in a biased way. In this work, we propose a novel Time Encoding (TE) mechanism. TE embeds the time information as time vectors in the complex domain. It has the properties of absolute distance and relative distance under different sampling rates, which helps represent both irregularities of ISTS. Meanwhile, we create a new model structure named Time Encoding Echo State Network (TE-ESN). It is the first ESN-based model that can process ISTS data. In addition, TE-ESN incorporates long short-term memories and series fusion to grasp horizontal and vertical relations. Experiments on one chaotic system and three real-world datasets show that TE-ESN outperforms all baselines and has better reservoir properties.
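
One plausible reading of embedding time as vectors in the complex domain is a bank of complex exponentials over a frequency ladder, under which relative distance appears as a phase rotation; this is an assumption for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def time_encoding(t, dim=16, base=10000.0):
    """Encode an (irregular) timestamp t as a complex-domain vector
    exp(1j * t * f_k) over a geometric ladder of frequencies f_k
    (a hypothetical reading of the TE mechanism)."""
    freqs = 1.0 / base ** (np.arange(dim) / dim)
    return np.exp(1j * t * freqs)           # (dim,) complex vector

# Relative distance shows up as a pure phase rotation:
# time_encoding(t2) / time_encoding(t1) == time_encoding(t2 - t1),
# which holds for any sampling rate, irregular or not.
```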

* 7 pages, 4 figures, accepted by IJCAI 2021 

Robustness Testing of Language Understanding in Dialog Systems

Dec 30, 2020
Jiexi Liu, Ryuichi Takanobu, Jiaxin Wen, Dazhen Wan, Weiran Nie, Hongyan Li, Cheng Li, Wei Peng, Minlie Huang

Most language understanding models in dialog systems are trained on a small amount of annotated data and evaluated on a small test set drawn from the same distribution. However, these models can lead to system failure or undesirable outputs when exposed to natural perturbations in practice. In this paper, we conduct a comprehensive evaluation and analysis of the robustness of natural language understanding models, and introduce three important aspects related to language understanding in real-world dialog systems, namely, language variety, speech characteristics, and noise perturbation. We propose LAUG, a model-agnostic toolkit that approximates natural perturbations for testing robustness issues in dialog systems. Four data augmentation approaches covering the three aspects are assembled in LAUG, and they reveal critical robustness issues in state-of-the-art models. The dataset augmented by LAUG can be used to facilitate future research on robustness testing of language understanding in dialog systems.
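
As a toy example of the noise-perturbation aspect (not one of LAUG's actual augmentation modules), character-level noise can be injected into an utterance as follows; the drop/duplicate rule and rate are invented for illustration.

```python
import random

def noise_perturb(utterance, p=0.05, seed=None):
    """Toy character-level noise augmentation for robustness testing
    (illustrative only; LAUG's own augmentations cover language variety,
    speech characteristics, and noise perturbation more systematically)."""
    rng = random.Random(seed)
    out = []
    for ch in utterance:
        r = rng.random()
        if r < p:            # drop the character with probability p
            continue
        out.append(ch)
        if r > 1 - p:        # duplicate the character with probability p
            out.append(ch)
    return "".join(out)

print(noise_perturb("book a flight to boston tomorrow", seed=0))
```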
