Yanyan Shen

Methods for Acquiring and Incorporating Knowledge into Stock Price Prediction: A Survey

Aug 09, 2023
Liping Wang, Jiawei Li, Lifan Zhao, Zhizhuo Kou, Xiaohan Wang, Xinyi Zhu, Hao Wang, Yanyan Shen, Lei Chen

Predicting stock prices presents a challenging research problem due to the inherent volatility and non-linear nature of the stock market. In recent years, knowledge-enhanced stock price prediction methods have shown groundbreaking results by utilizing external knowledge to understand the stock market. Despite the importance of these methods, there is a scarcity of scholarly works that systematically synthesize previous studies from the perspective of external knowledge types. Specifically, the external knowledge can be modeled in different data structures, which we group into non-graph-based formats and graph-based formats: 1) non-graph-based knowledge captures contextual information and multimedia descriptions specifically associated with an individual stock; 2) graph-based knowledge captures interconnected and interdependent information in the stock market. This survey paper aims to provide a systematic and comprehensive description of methods for acquiring external knowledge from various unstructured data sources and then incorporating it into stock price prediction models. We also explore fusion methods for combining external knowledge with historical price features. Moreover, this paper includes a compilation of relevant datasets and delves into potential future research directions in this domain.
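
The survey's fusion discussion can be made concrete with a small sketch. Below is a hypothetical gated-fusion layer (the module and all names are ours for illustration, not taken from any surveyed method) that blends a knowledge embedding with historical price features:

```python
import torch
import torch.nn as nn

class GatedKnowledgeFusion(nn.Module):
    """Hypothetical gated fusion of a knowledge embedding with price features."""
    def __init__(self, price_dim: int, know_dim: int, hidden_dim: int):
        super().__init__()
        self.price_proj = nn.Linear(price_dim, hidden_dim)
        self.know_proj = nn.Linear(know_dim, hidden_dim)
        # The gate decides, per dimension, how much external knowledge to admit.
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, price_feat, know_feat):
        p = torch.tanh(self.price_proj(price_feat))
        k = torch.tanh(self.know_proj(know_feat))
        g = torch.sigmoid(self.gate(torch.cat([p, k], dim=-1)))
        return g * k + (1 - g) * p  # convex blend of the two sources

fusion = GatedKnowledgeFusion(price_dim=6, know_dim=64, hidden_dim=32)
out = fusion(torch.randn(8, 6), torch.randn(8, 64))  # batch of 8 stocks
print(out.shape)  # torch.Size([8, 32])
```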

DoubleAdapt: A Meta-learning Approach to Incremental Learning for Stock Trend Forecasting

Jun 16, 2023
Lifan Zhao, Shuming Kong, Yanyan Shen

Stock trend forecasting is a fundamental task in quantitative investment, where precise predictions of price trends are indispensable. In this online setting, stock data continuously arrive over time. It is practical and efficient to incrementally update the forecast model with the latest data, which may reveal new patterns that will recur in the future stock market. However, incremental learning for stock trend forecasting remains under-explored due to the challenge of distribution shifts (a.k.a. concept drifts). As the stock market dynamically evolves, the distribution of future data can differ slightly or significantly from that of the incremental data, hindering the effectiveness of incremental updates. To address this challenge, we propose DoubleAdapt, an end-to-end framework with two adapters that effectively adapt both the data and the model to mitigate the effects of distribution shifts. Our key insight is to automatically learn how to adapt stock data into a locally stationary distribution in favor of profitable updates. Complemented by data adaptation, we can confidently adapt the model parameters under mitigated distribution shifts. We cast each incremental learning task as a meta-learning task and automatically optimize the adapters for desirable data adaptation and parameter initialization. Experiments on real-world stock datasets demonstrate that DoubleAdapt achieves state-of-the-art predictive performance and shows considerable efficiency.

* Accepted by KDD 2023 
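
To make the two-adapter idea concrete, here is a heavily simplified sketch of one incremental meta-learning step, assuming a linear data adapter, a linear forecast model, and a single inner gradient step; this is our toy rendering, not the paper's exact algorithm:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy two-adapter step: a linear data adapter and a linear forecast model,
# meta-optimized over incremental tasks. Shapes and the single inner step
# are our simplifications.
data_adapter = nn.Linear(10, 10)   # adapts features toward a locally stationary distribution
model = nn.Linear(10, 1)           # forecast model whose initialization is meta-learned
meta_opt = torch.optim.Adam(
    list(data_adapter.parameters()) + list(model.parameters()), lr=1e-3)

def incremental_task(sx, sy, qx, qy, inner_lr=0.1):
    # Inner loop: adapt the model on the latest (support) data.
    loss = F.mse_loss(model(data_adapter(sx)), sy)
    grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
    w, b = [p - inner_lr * g for p, g in zip(model.parameters(), grads)]
    # Outer loop: evaluate the adapted parameters on future (query) data.
    return F.mse_loss(F.linear(data_adapter(qx), w, b), qy)

for _ in range(3):  # each iteration is one incremental learning task
    sx, sy, qx, qy = (torch.randn(32, 10), torch.randn(32, 1),
                      torch.randn(32, 10), torch.randn(32, 1))
    meta_opt.zero_grad()
    incremental_task(sx, sy, qx, qy).backward()
    meta_opt.step()
```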

RESUS: Warm-Up Cold Users via Meta-Learning Residual User Preferences in CTR Prediction

Oct 28, 2022
Yanyan Shen, Lifan Zhao, Weiyu Cheng, Zibin Zhang, Wenwen Zhou, Kangyi Lin

Click-Through Rate (CTR) prediction for cold users is a challenging task in recommender systems. Recent studies have resorted to meta-learning to tackle the cold-user challenge, either performing few-shot user representation learning or adopting optimization-based meta-learning. However, existing methods suffer from information loss or an inefficient optimization process, and they fail to explicitly model global user preference knowledge, which is crucial for complementing the sparse and insufficient preference information of cold users. In this paper, we propose a novel and efficient approach named RESUS, which decouples the learning of global preference knowledge contributed by collective users from the learning of residual preferences for individual users. Specifically, we employ a shared predictor to infer basis user preferences, which acquires global preference knowledge from the interactions of different users. Meanwhile, we develop two efficient algorithms based on nearest neighbor and ridge regression predictors, which infer residual user preferences by learning quickly from a few user-specific interactions. Extensive experiments on three public datasets demonstrate that our RESUS approach is efficient and effective in improving CTR prediction accuracy for cold users compared with various state-of-the-art methods.

* Accepted by TOIS 2022. Code is available at https://github.com/MogicianXD/RESUS
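
The ridge-regression variant of the residual idea admits a compact closed form. The sketch below is our simplified rendering (synthetic data, a linear stand-in for the shared predictor), not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                    # 5 interactions of one cold user, 8-dim item features
y = rng.integers(0, 2, size=5).astype(float)   # observed clicks
shared_w = rng.normal(size=8)                  # stand-in for the shared predictor's weights
base = 1 / (1 + np.exp(-X @ shared_w))         # basis preference scores from global knowledge

residual = y - base                            # what the global predictor failed to explain
lam = 1.0                                      # ridge regularization strength
w_res = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ residual)

x_new = rng.normal(size=8)                     # a candidate item for this user
score = 1 / (1 + np.exp(-x_new @ shared_w)) + x_new @ w_res
```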

Morphological feature visualization of Alzheimer's disease via Multidirectional Perception GAN

Nov 25, 2021
Wen Yu, Baiying Lei, Yanyan Shen, Shuqiang Wang, Yong Liu, Zhiguang Feng, Yong Hu, Michael K. Ng

The diagnosis of the early stages of Alzheimer's disease (AD) is essential for timely treatment to slow further deterioration. Visualizing the morphological features of the early stages of AD is of great clinical value. In this work, a novel Multidirectional Perception Generative Adversarial Network (MP-GAN) is proposed to visualize the morphological features indicating the severity of AD for patients at different stages. Specifically, by introducing a novel multidirectional mapping mechanism into the model, the proposed MP-GAN can capture salient global features efficiently. Thus, by utilizing the class-discriminative map from the generator, the proposed model can clearly delineate subtle lesions via MR image transformations between the source domain and the pre-defined target domain. Moreover, by integrating the adversarial loss, classification loss, cycle consistency loss, and L1 penalty, a single generator in MP-GAN can learn the class-discriminative maps for multiple classes. Extensive experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that MP-GAN achieves superior performance compared with existing methods. The lesions visualized by MP-GAN are also consistent with what clinicians observe.
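
As an illustration of how the four loss terms named above might combine, here is a toy generator objective with stub networks; the weights and architectures are placeholders of ours, and the L1 term here penalizes the output rather than the paper's class-discriminative map:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyG(nn.Module):
    """Stand-in generator conditioned on a target class label."""
    def __init__(self, dim=64, n_cls=3):
        super().__init__()
        self.net, self.n_cls = nn.Linear(dim + n_cls, dim), n_cls
    def forward(self, x, label):
        onehot = F.one_hot(label, self.n_cls).float()
        return self.net(torch.cat([x, onehot], dim=-1))

G, D, C = ToyG(), nn.Linear(64, 1), nn.Linear(64, 3)  # generator, critic, classifier
x = torch.randn(4, 64)                                # flattened "source-domain image"
src = torch.zeros(4, dtype=torch.long)                # source class labels
tgt = torch.full((4,), 2)                             # target class labels

x_fake = G(x, tgt)
loss = (-D(x_fake).mean()                             # adversarial term
        + F.cross_entropy(C(x_fake), tgt)             # classification term
        + 10 * F.l1_loss(G(x_fake, src), x)           # cycle-consistency term
        + 0.1 * x_fake.abs().mean())                  # L1 penalty (simplified)
loss.backward()
```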

A Prior Guided Adversarial Representation Learning and Hypergraph Perceptual Network for Predicting Abnormal Connections of Alzheimer's Disease

Oct 12, 2021
Qiankun Zuo, Baiying Lei, Shuqiang Wang, Yong Liu, Bingchuan Wang, Yanyan Shen

Alzheimer's disease is characterized by alterations of the brain's structural and functional connectivity during its progressive degenerative processes. Existing auxiliary diagnostic methods have accomplished the classification task, but few of them can accurately evaluate the changing characteristics of brain connectivity. In this work, a prior-guided adversarial representation learning and hypergraph perceptual network (PGARL-HPN) is proposed to predict abnormal brain connections using triple-modality medical images. Concretely, a prior distribution derived from anatomical knowledge is estimated to guide multimodal representation learning using an adversarial strategy. Also, a pairwise collaborative discriminator structure is further utilized to narrow the difference between representation distributions. Moreover, the hypergraph perceptual network is developed to effectively fuse the learned representations while establishing high-order relations within and between multimodal images. Experimental results demonstrate that the proposed model outperforms other related methods in analyzing and predicting Alzheimer's disease progression. More importantly, the identified abnormal connections are partly consistent with previous neuroscience findings. The proposed model can evaluate the characteristics of abnormal brain connections at different stages of Alzheimer's disease, which is helpful for cognitive disease study and early treatment.
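
Hyperedge-based message passing of the kind the hypergraph perceptual network builds on is commonly written as X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta, with H the incidence matrix and W the hyperedge weights. The numpy sketch below illustrates this generic hypergraph convolution, not the PGARL-HPN layer itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_edges, f_in, f_out = 6, 3, 4, 2
H = (rng.random((n_nodes, n_edges)) > 0.5).astype(float)  # incidence matrix
H[H.sum(axis=1) == 0, 0] = 1     # guard: every node joins at least one hyperedge
H[0, H.sum(axis=0) == 0] = 1     # guard: no empty hyperedges
W = np.eye(n_edges)              # hyperedge weights
X = rng.normal(size=(n_nodes, f_in))
Theta = rng.normal(size=(f_in, f_out))

Dv_inv_sqrt = np.diag(1 / np.sqrt(H @ np.diagonal(W)))    # node-degree normalization
De_inv = np.diag(1 / H.sum(axis=0))                       # hyperedge-degree normalization
X_out = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt @ X @ Theta
print(X_out.shape)  # (6, 2)
```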

DecGAN: Decoupling Generative Adversarial Network detecting abnormal neural circuits for Alzheimer's disease

Oct 12, 2021
Junren Pan, Baiying Lei, Shuqiang Wang, Bingchuan Wang, Yong Liu, Yanyan Shen

One of the main causes of Alzheimer's disease (AD) is the disorder of certain neural circuits. Existing methods for AD prediction have achieved great success; however, detecting abnormal neural circuits from the perspective of brain networks remains a major challenge. In this work, a novel decoupling generative adversarial network (DecGAN) is proposed to detect abnormal neural circuits for AD. Concretely, a decoupling module is designed to decompose a brain network into two parts: one part consists of a few sparse graphs representing the neural circuits that largely determine the development of AD; the other part is a supplement graph whose influence on AD can be ignored. Furthermore, an adversarial strategy is utilized to guide the decoupling module to extract features more relevant to AD. Meanwhile, by encoding the detected neural circuits as hypergraph data, an analytic module associated with the hyperedge neurons algorithm is designed to identify the neural circuits. More importantly, a novel sparse capacity loss based on spatial-spectral hypergraph similarity is developed to constrain the intrinsic topological distribution of neural circuits, which significantly improves the accuracy and robustness of the proposed model. Experimental results demonstrate that the proposed model can effectively detect abnormal neural circuits at different stages of AD, which is helpful for pathological study and early treatment.
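
The decoupling idea can be mimicked with a toy top-k rule: repeatedly peel off a sparse subgraph holding the strongest remaining connections, and treat the leftover as the supplement graph. DecGAN's learned decoupling module is far more involved; this is only a stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.abs(rng.normal(size=(8, 8)))   # weighted brain network over 8 regions
A = (A + A.T) / 2
np.fill_diagonal(A, 0)

def peel_sparse_graph(A, k):
    """Extract a subgraph holding the k strongest remaining edges."""
    S = np.zeros_like(A)
    iu = np.triu_indices_from(A, k=1)
    top = np.argsort(A[iu])[::-1][:k]   # k largest upper-triangle edges
    r, c = iu[0][top], iu[1][top]
    S[r, c] = A[r, c]
    S[c, r] = A[c, r]
    return S, A - S

circuits, rest = [], A.copy()
for _ in range(2):                      # two candidate "neural circuits"
    S, rest = peel_sparse_graph(rest, k=5)
    circuits.append(S)
# `rest` plays the role of the supplement graph whose influence is ignored.
```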

Characterization Multimodal Connectivity of Brain Network by Hypergraph GAN for Alzheimer's Disease Analysis

Jul 21, 2021
Junren Pan, Baiying Lei, Yanyan Shen, Yong Liu, Zhiguang Feng, Shuqiang Wang

Using multimodal neuroimaging data to characterize brain networks is currently an advanced technique for Alzheimer's disease (AD) analysis. Over recent years, the neuroimaging community has made tremendous progress in the study of resting-state functional magnetic resonance imaging (rs-fMRI) derived from blood-oxygen-level-dependent (BOLD) signals and diffusion tensor imaging (DTI) derived from white matter fiber tractography. However, due to the heterogeneity and complexity between BOLD signals and fiber tractography, most existing multimodal data fusion algorithms cannot sufficiently exploit the complementary information between rs-fMRI and DTI. To overcome this problem, a novel Hypergraph Generative Adversarial Network (HGGAN) is proposed in this paper, which utilizes an Interactive Hyperedge Neurons module (IHEN) and an Optimal Hypergraph Homomorphism algorithm (OHGH) to generate the multimodal connectivity of brain networks from rs-fMRI in combination with DTI. To evaluate the performance of this model, we use publicly available data from the ADNI database to demonstrate that the proposed model can not only identify discriminative brain regions of AD but also effectively improve classification performance.
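
One conventional way to obtain a hypergraph from BOLD signals, offered here only to make the data structure concrete (HGGAN's IHEN and OHGH modules are more involved), is to correlate regional time series and let each region plus its k most correlated neighbors form a hyperedge:

```python
import numpy as np

rng = np.random.default_rng(0)
bold = rng.normal(size=(90, 200))   # 90 brain regions x 200 BOLD time points
C = np.corrcoef(bold)               # functional connectivity matrix
k = 5
H = np.zeros((90, 90))              # nodes x hyperedges (one hyperedge per region)
for e in range(90):
    nbrs = np.argsort(-C[e])[:k + 1]  # region e itself plus its k most correlated regions
    H[nbrs, e] = 1
```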

Multimodal Representations Learning and Adversarial Hypergraph Fusion for Early Alzheimer's Disease Prediction

Jul 21, 2021
Qiankun Zuo, Baiying Lei, Yanyan Shen, Yong Liu, Zhiguang Feng, Shuqiang Wang

Multimodal neuroimages can provide complementary information about dementia, but the small size of complete multimodal datasets limits representation learning. Moreover, data distribution inconsistency across modalities may lead to ineffective fusion, which fails to sufficiently explore the intra-modal and inter-modal interactions and compromises disease diagnosis performance. To solve these problems, we propose a novel multimodal representation learning and adversarial hypergraph fusion (MRL-AHF) framework for Alzheimer's disease diagnosis using complete trimodal images. First, an adversarial strategy and a pre-trained model are incorporated into the MRL to extract latent representations from the multimodal data. Then two hypergraphs are constructed from the latent representations, and an adversarial network based on graph convolution is employed to narrow the distribution difference of hyperedge features. Finally, the hyperedge-invariant features are fused for disease prediction by hyperedge convolution. Experiments on the public Alzheimer's Disease Neuroimaging Initiative (ADNI) database demonstrate that our model achieves superior performance on Alzheimer's disease detection compared with other related models and provides a possible way to understand the underlying mechanisms of the disorder's progression by analyzing the abnormal brain connections.

* 13 pages, 3 figures 
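
The adversarial narrowing of hyperedge feature distributions can be sketched with a small discriminator trained to tell the two modalities apart while the feature extractors are trained to fool it; the tiny networks and non-saturating losses below are our choices, not the paper's:

```python
import torch
import torch.nn as nn

disc = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

feat_a = torch.randn(64, 16)        # hyperedge features from modality A
feat_b = torch.randn(64, 16) + 0.5  # modality B, with a shifted distribution

# The discriminator learns to tell the two modalities apart ...
opt_d.zero_grad()
d_loss = bce(disc(feat_a), torch.ones(64, 1)) + bce(disc(feat_b), torch.zeros(64, 1))
d_loss.backward()
opt_d.step()

# ... while the feature extractors (omitted here) are trained with the
# opposite label, pushing the two distributions to become indistinguishable.
align_loss = bce(disc(feat_b), torch.ones(64, 1))
```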

A Point Cloud Generative Model via Tree-Structured Graph Convolutions for 3D Brain Shape Reconstruction

Jul 21, 2021
Bowen Hu, Baiying Lei, Yanyan Shen, Yong Liu, Shuqiang Wang

Fusing medical images with the corresponding 3D shape representation can provide complementary information and microstructure details that improve operational performance and accuracy in brain surgery. However, compared to the substantial image data, it is almost impossible to obtain intraoperative 3D shape information by physical methods such as sensor scanning, especially in minimally invasive surgery and robot-guided surgery. In this paper, a general generative adversarial network (GAN) architecture based on graph convolutional networks is proposed to reconstruct the 3D point clouds (PCs) of brains from one single 2D image, thus relieving the limitation of acquiring 3D shape data during surgery. Specifically, a tree-structured generative mechanism is constructed to use the latent vector effectively and transfer features between hidden layers accurately. With the proposed generative model, image-to-PC conversion is accomplished in real time. Our model achieves competitive qualitative and quantitative experimental results: under multiple evaluation metrics, it outperforms PointOutNet, a common point cloud generative model.
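
A minimal sketch of a tree-structured expansion in the spirit of the generator: each layer lets every point spawn a fixed number of children through a shared linear map, growing a latent vector into a point cloud. The layer sizes and branching factors below are arbitrary choices of ours:

```python
import torch
import torch.nn as nn

class TreeBranch(nn.Module):
    """Each point spawns `branch` children through a shared linear map."""
    def __init__(self, dim, branch):
        super().__init__()
        self.branch, self.fc = branch, nn.Linear(dim, dim * branch)
    def forward(self, pts):                       # pts: (B, N, dim)
        B, N, dim = pts.shape
        return torch.tanh(self.fc(pts).view(B, N * self.branch, dim))

class TreeGenerator(nn.Module):
    def __init__(self, latent=96, branches=(2, 2, 4)):
        super().__init__()
        self.layers = nn.Sequential(*[TreeBranch(latent, b) for b in branches])
        self.to_xyz = nn.Linear(latent, 3)        # project features to coordinates
    def forward(self, z):                         # z: (B, latent)
        return self.to_xyz(self.layers(z.unsqueeze(1)))

pc = TreeGenerator()(torch.randn(4, 96))
print(pc.shape)  # torch.Size([4, 16, 3]) -- 2*2*4 = 16 points per cloud
```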

Bidirectional Mapping Generative Adversarial Networks for Brain MR to PET Synthesis

Aug 08, 2020
Shengye Hu, Baiying Lei, Yong Wang, Zhiguang Feng, Yanyan Shen, Shuqiang Wang

Fusing multi-modality medical images, such as MR and PET, can provide various anatomical and functional information about the human body. However, PET data are often unavailable for reasons such as cost, radiation exposure, or other limitations. In this paper, we propose a 3D end-to-end synthesis network called Bidirectional Mapping Generative Adversarial Networks (BMGAN), where image contexts and latent vectors are effectively used and jointly optimized for brain MR-to-PET synthesis. Concretely, a bidirectional mapping mechanism is designed to embed the semantic information of PET images into the high-dimensional latent space. The 3D DenseU-Net generator architecture and extensive objective functions are further utilized to improve the visual quality of the synthetic results. The most appealing part is that the proposed method can synthesize perceptually realistic PET images while preserving the diverse brain structures of different subjects. Experimental results demonstrate that the proposed method outperforms other competitive cross-modality synthesis methods in terms of quantitative measures, qualitative displays, and classification evaluation.

* 12 pages, 10 figures
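
The bidirectional mapping can be schematized as follows: an encoder embeds real PET images into the latent space, and the generator must both fool the critic and reproduce the latent code. The 1-D toy tensors and loss weights below stand in for 3D volumes and the paper's full objective:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, zdim = 128, 16
G = nn.Linear(dim + zdim, dim)   # (MR, z) -> synthetic PET
E = nn.Linear(dim, zdim)         # PET -> latent code (the "backward" mapping)
D = nn.Linear(dim, 1)            # critic

mr, pet = torch.randn(4, dim), torch.randn(4, dim)
z = torch.randn(4, zdim)
fake_pet = G(torch.cat([mr, z], dim=-1))

loss = (-D(fake_pet).mean()                 # fool the critic
        + 10 * F.l1_loss(fake_pet, pet)     # voxel-wise fidelity to the real PET
        + F.l1_loss(E(fake_pet), z))        # reconstruct the latent code
loss.backward()
```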