
Ling Cheng


Examining the Effect of Pre-training on Time Series Classification

Sep 11, 2023
Jiashu Pu, Shiwei Zhao, Ling Cheng, Yongzhu Chang, Runze Wu, Tangjie Lv, Rongsheng Zhang

Figures 1–4 for Examining the Effect of Pre-training on Time Series Classification

Although the pre-training followed by fine-tuning paradigm is used extensively in many fields, there is still some controversy surrounding the impact of pre-training on the fine-tuning process, and experimental findings based on text and image data have not reached a consensus. To delve deeper into the unsupervised pre-training followed by fine-tuning paradigm, we extend previous research to a new modality: time series. In this study, we conduct a thorough examination of 150 classification datasets derived from the Univariate Time Series (UTS) and Multivariate Time Series (MTS) benchmarks. Our analysis reveals several key conclusions. (i) Pre-training can improve the optimization process only for models that fit the data poorly, not for those that already fit the data well. (ii) Pre-training does not act as a regularizer when sufficient training time is given. (iii) Pre-training speeds up convergence only if the model has sufficient capacity to fit the data. (iv) Adding more pre-training data does not improve generalization, but it can strengthen the advantages pre-training already confers at the original data volume, such as faster convergence. (v) While both the pre-training task and the model structure determine the effectiveness of the paradigm on a given dataset, the model structure plays the more significant role.
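As a rough illustration of the paradigm under study, the sketch below "pre-trains" a linear encoder on unlabeled synthetic series (PCA as a deliberately simple stand-in for the paper's unsupervised pre-training tasks) and then fine-tunes a logistic-regression head on the encoded features. All data, dimensions, and the encoder choice are illustrative assumptions, not the paper's models or benchmarks.

```python
import numpy as np

# Hypothetical sketch of unsupervised pre-training -> fine-tuning on
# synthetic univariate time series (illustrative, not the paper's setup).
rng = np.random.default_rng(0)

# Unlabeled series for pre-training; a smaller labeled set for fine-tuning.
X_unlabeled = rng.normal(size=(200, 32))
X_train = rng.normal(size=(64, 32))
y_train = (X_train.mean(axis=1) > 0).astype(float)   # toy labels

# "Pre-training": learn a linear encoder from the unlabeled data via PCA.
X_c = X_unlabeled - X_unlabeled.mean(axis=0)
_, _, Vt = np.linalg.svd(X_c, full_matrices=False)
encoder = Vt[:8].T                       # 32-dim series -> 8-dim features

def finetune(w, Z, y, lr=0.5, steps=200):
    """Logistic-regression fine-tuning by plain gradient descent."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Z @ w))
        w -= lr * Z.T @ (p - y) / len(y)
    p = np.clip(1.0 / (1.0 + np.exp(-Z @ w)), 1e-9, 1 - 1e-9)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

Z = X_train @ encoder                    # features from pre-trained encoder
loss_pretrained = finetune(np.zeros(8), Z, y_train)
loss_initial = float(-np.log(0.5))       # loss of the untrained classifier
```

Comparing the fine-tuned loss against the untrained baseline (and against a randomly initialized encoder, convergence curve by curve) is the kind of controlled measurement the conclusions above rest on.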


Evolve Path Tracer: Early Detection of Malicious Addresses in Cryptocurrency

Jan 13, 2023
Ling Cheng, Feida Zhu, Yong Wang, Ruicheng Liang, Huiwen Liu

Figures 1–4 for Evolve Path Tracer: Early Detection of Malicious Addresses in Cryptocurrency

With the ever-increasing boom of cryptocurrency, detecting fraudulent behaviors and the associated malicious addresses has drawn significant research effort. However, most existing studies still rely on full-history features or full-fledged address transaction networks, and thus cannot meet the requirements of early malicious address detection, which is urgent but seldom discussed in existing studies. To detect the fraudulent behaviors of malicious addresses at an early stage, we present Evolve Path Tracer, which consists of Evolve Path Encoder LSTM, Evolve Path Graph GCN, and a Hierarchical Survival Predictor. Specifically, in addition to general address features, we propose asset transfer paths and corresponding path graphs to characterize early transaction patterns. Further, since transaction patterns change rapidly in the early stage, we propose Evolve Path Encoder LSTM and Evolve Path Graph GCN to encode asset transfer paths and path graphs under an evolving structure setting. The Hierarchical Survival Predictor then predicts addresses' labels with good scalability and fast prediction speed. We investigate the effectiveness and versatility of Evolve Path Tracer on three real-world illicit Bitcoin datasets. Our experimental results demonstrate that Evolve Path Tracer outperforms the state-of-the-art methods, and extensive scalability experiments demonstrate the model's adaptivity under a dynamic prediction setting.
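The notion of an asset transfer path can be illustrated on a toy transaction log: starting from a target address, follow time-increasing transfers forward until funds stop moving. The log layout and the `transfer_paths` helper below are hypothetical stand-ins, not the paper's data model.

```python
from collections import defaultdict

# Toy transaction log: (sender, receiver, amount, timestep). Illustrative only.
txs = [
    ("addr_X", "addr_A", 5.0, 1),
    ("addr_A", "addr_B", 3.0, 2),
    ("addr_A", "addr_C", 2.0, 2),
    ("addr_B", "addr_D", 1.0, 3),
]

def transfer_paths(target, txs, max_hops=3):
    """Enumerate forward asset-transfer paths starting at `target`,
    following only time-increasing edges (funds move forward in time)."""
    out = defaultdict(list)
    for s, r, amt, t in txs:
        out[s].append((r, amt, t))
    paths = []
    def walk(node, t_prev, path):
        extended = False
        for r, amt, t in out[node]:
            if t > t_prev and len(path) < max_hops:
                extended = True
                walk(r, t, path + [(node, r, amt)])
        if path and not extended:       # maximal path: record it
            paths.append(path)
    walk(target, 0, [])
    return paths

paths = transfer_paths("addr_X", txs)
```

Each extracted path is a sequence of (sender, receiver, amount) hops; a model in the spirit of the one above would then encode such sequences (and the graph they induce) rather than raw full-history features.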


Toward Intention Discovery for Early Malice Detection in Bitcoin

Sep 24, 2022
Ling Cheng, Feida Zhu, Yong Wang, Huiwen Liu

Figures 1–4 for Toward Intention Discovery for Early Malice Detection in Bitcoin

Bitcoin has been subject to illicit activities more often than probably any other financial asset, due to the pseudo-anonymous nature of its transacting entities. An ideal detection model is expected to achieve all three of the following properties: (I) early detection, (II) good interpretability, and (III) versatility across various illicit activities. However, existing solutions cannot meet all these requirements, as most of them rely heavily on deep learning without satisfactory interpretability and are only suited to retrospective analysis of a specific illicit type. First, we present asset transfer paths, which aim to describe addresses' early characteristics. Next, with a decision-tree-based strategy for feature selection and segmentation, we split the entire observation period into different segments and encode each as a segment vector. After clustering all these segment vectors, we obtain the global status vectors, essentially the basic units used to describe the whole intention. Finally, a hierarchical self-attention predictor predicts the label for the given address in real time. A survival module tells the predictor when to stop and proposes the status sequence, namely the intention. With the type-dependent selection strategy and global status vectors, our model can be applied to detect various illicit activities with strong interpretability. The well-designed predictor and particular loss functions strengthen the model's prediction speed and interpretability one step further. Extensive experiments on three real-world datasets show that our proposed algorithm outperforms state-of-the-art methods. In addition, case studies show that our model can not only explain existing illicit patterns but also find new suspicious characteristics.
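The early-detection idea can be sketched as a predictor that scores an address at each observation step, with a survival-style module deciding when the evidence is strong enough to stop and commit to a label. The confidence sequence and threshold below are toy values, not the paper's model.

```python
# Hypothetical sketch of survival-style early stopping (toy values only).
def early_predict(confidences, threshold=0.9):
    """Return (label, stop_step): stop at the first step whose confidence
    for the illicit class reaches the threshold; otherwise decide at the
    end of the observation period."""
    for step, c in enumerate(confidences, start=1):
        if c >= threshold:
            return "illicit", step
    return ("illicit" if confidences[-1] >= 0.5 else "licit"), len(confidences)

# Confidence rises as more of the address's behavior is observed.
label, stop_step = early_predict([0.35, 0.62, 0.93, 0.97])
```

Stopping at step 3 instead of waiting for the full history is exactly the trade-off that distinguishes early detection from retrospective analysis.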


Watermark-Based Code Construction for Finite-State Markov Channel with Synchronisation Errors

Jul 20, 2022
Shamin Achari, Ling Cheng

Figures 1–4 for Watermark-Based Code Construction for Finite-State Markov Channel with Synchronisation Errors

With advancements in telecommunications, data transmission over increasingly harsh channels that produce synchronisation errors is inevitable. Coding schemes for such channels are available through techniques such as Davey-MacKay watermark coding; however, these are limited to memoryless channel estimates. Memory must be accounted for to ensure a realistic channel approximation, similar to a Finite-State Markov Chain or Fritchman model. A novel code construction and decoder are developed to correct synchronisation errors while accounting for the channel's correlated memory effects, incorporating ideas from the watermark scheme and memory modelling. Simulation results show that the proposed code construction and decoder rival first- and second-order Davey-MacKay-type watermark decoders, and even perform slightly better when the inner-channel capacity is higher than 0.9. The proposed system and decoder may prove helpful in fields such as free-space optics, and possibly molecular communication, where harsh channels are used.
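The watermark idea underlying Davey-MacKay-style constructions can be sketched in a few lines: a low-density ("sparse") inner codeword is XORed onto a pseudorandom watermark sequence known to both ends, and the receiver strips the watermark to expose the sparse sequence, whose known statistics help it track insertions and deletions. The sketch below shows only the error-free encode/strip step; lengths and the sparse pattern are illustrative assumptions.

```python
import random

# Hypothetical sketch of the watermark layer (no channel errors applied).
random.seed(1)
n = 24
watermark = [random.randint(0, 1) for _ in range(n)]   # shared PRNG output
sparse = [1 if i % 8 == 0 else 0 for i in range(n)]    # low-weight codeword

# Transmit: superimpose the sparse codeword on the watermark.
transmitted = [w ^ s for w, s in zip(watermark, sparse)]

# Receive: with no errors, stripping the watermark recovers the codeword.
recovered = [t ^ w for t, w in zip(transmitted, watermark)]
```

In a real decoder the stripped sequence feeds a hidden-Markov synchronisation pass; extending that pass with channel memory (Markov/Fritchman state) is where the construction above departs from the memoryless Davey-MacKay baseline.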

* Submitted to Elsevier Digital Signal Processing 

Geometry-Entangled Visual Semantic Transformer for Image Captioning

Sep 29, 2021
Ling Cheng, Wei Wei, Feida Zhu, Yong Liu, Chunyan Miao

Figures 1–4 for Geometry-Entangled Visual Semantic Transformer for Image Captioning

Recent advances in image captioning have featured Visual-Semantic Fusion or Geometry-Aid attention refinement. However, fusion-based models are still criticized for lacking geometry information in inter- and intra-attention refinement, while models based on Geometry-Aid attention still suffer from the modality gap between visual and semantic information. In this paper, we introduce a novel Geometry-Entangled Visual Semantic Transformer (GEVST) network to realize the complementary advantages of Visual-Semantic Fusion and Geometry-Aid attention refinement. Concretely, a Dense-Cap model first proposes dense captions with corresponding geometry information. Then, to empower GEVST with the ability to bridge the modality gap between visual and semantic information, we build four parallel transformer encoders, VV (Pure Visual), VS (Semantic fused to Visual), SV (Visual fused to Semantic), and SS (Pure Semantic), for final caption generation. Both visual and semantic geometry features are used in the Fusion module as well as in the Self-Attention module for better attention measurement. To validate our model, we conduct extensive experiments on the MS-COCO dataset; the results show that our GEVST model obtains promising performance gains.
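One common way to fold geometry into self-attention, and a rough analogue of the attention measurement described above, is to add a pairwise geometry bias to the content-based attention logits before the softmax. The sketch below uses a toy distance-based bias over region indices; shapes and the bias function are illustrative assumptions, not GEVST's actual formulation.

```python
import numpy as np

# Hypothetical geometry-biased self-attention over image regions.
rng = np.random.default_rng(0)
n, d = 5, 8                        # 5 regions, 8-dim features
Q = rng.normal(size=(n, d))
K = rng.normal(size=(n, d))
V = rng.normal(size=(n, d))

# Toy geometry prior: nearby regions (by index) attend to each other more.
geom_bias = -np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]) * 0.5

logits = Q @ K.T / np.sqrt(d) + geom_bias   # content score + geometry prior
weights = np.exp(logits - logits.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)
attended = weights @ V
```

In a four-encoder design, the same biased attention would run inside each of the VV/VS/SV/SS streams, with the fusion module deciding how visual and semantic geometry features are shared between them.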


Self-Synchronising On-Off-Keying Visible Light Communication System For Intra and Inter-Vehicle Data Transmission

Jan 13, 2021
Shamin Achari, Alice Yi Yang, James Goodhead, Brendon Swanepoel, Ling Cheng

Figures 1–4 for Self-Synchronising On-Off-Keying Visible Light Communication System For Intra and Inter-Vehicle Data Transmission

Visible Light Communication (VLC) is a technology that transmits data by modulating information onto a light source. It has many advantages over traditional radio frequency communication, including up to 10,000 times more available bandwidth. Existing research in visible light communication assumes a synchronised channel; however, this is not always easily achieved. In this paper, a novel self-synchronising VLC system is proposed to ensure reliable communication in both intra- and inter-vehicle communication for Infotainment Systems (IS). The protocol achieves synchronisation at the symbol level using the transistor-transistor logic protocol, and achieves frame synchronisation with markers. Consequently, deploying the protocol in both inter- and intra-vehicle communication presents numerous advantages over existing data transmission processes. A practical application in which VLC is used for media streaming is also previewed. In addition, various regions of possible data transmission are determined, with the intention of deriving forward error correction schemes to ensure reliable communication.
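Marker-based frame synchronisation, as used above, amounts to scanning the received on-off-keyed bitstream for a known start marker and reading the fixed-length payload that follows it. The marker pattern and payload length below are toy values, not the protocol's actual parameters.

```python
# Hypothetical marker-based frame sync for an OOK bitstream (toy values).
MARKER = [1, 0, 1, 1, 0, 1]
PAYLOAD_LEN = 8

def sync_and_extract(bits):
    """Slide over the received bits until the marker matches, then return
    the fixed-length payload that follows it (None if no frame is found)."""
    for i in range(len(bits) - len(MARKER) - PAYLOAD_LEN + 1):
        if bits[i:i + len(MARKER)] == MARKER:
            start = i + len(MARKER)
            return bits[start:start + PAYLOAD_LEN]
    return None

payload = [1, 1, 0, 0, 1, 0, 1, 0]
stream = [0, 0, 1] + MARKER + payload + [0, 1]   # leading noise + one frame
recovered = sync_and_extract(stream)
```

A real receiver would additionally guard against the marker pattern occurring inside payload data (e.g. by bit stuffing) and combine this frame-level sync with the symbol-level timing recovery described above.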

* Submitted to International Journal of Communication Systems (Wiley). 23 pages 

Solving MKP Applied to IoT in Smart Grid Using Meta-heuristics Algorithms: A Parallel Processing Perspective

Jun 29, 2020
Jandre Albertyn, Ling Cheng, Adnan M. Abu-Mahfouz

Figure 1 for Solving MKP Applied to IoT in Smart Grid Using Meta-heuristics Algorithms: A Parallel Processing Perspective

Increasing electricity prices in South Africa and the imminent threat of load shedding due to the overloaded power grid have led to a need for Demand Side Management (DSM) technologies such as smart grids. For smart grids to perform at their peak, their energy management controller (EMC) systems need to be optimized. Current solutions for DSM and for optimization of the Multiple Knapsack Problem (MKP) are surveyed in this paper to establish the current state of common DSM models. Solutions drawn from other NP-hard problems, in the form of the iterative Discrete Flower Pollination Algorithm (iDFPA), as well as possible future scalability options in the form of optimization through parallelization, are also suggested.
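To make the MKP framing concrete, the sketch below builds a toy instance (appliances as weighted, valued items; knapsacks as capacity-limited scheduling slots) and solves it with a simple value-density greedy, the kind of baseline a meta-heuristic like iDFPA would be compared against. Items, capacities, and values are illustrative, not data from the paper.

```python
# Toy Multiple Knapsack instance: (weight, value) items, two knapsacks.
items = [(10, 60), (20, 100), (30, 120), (15, 45)]
capacities = [35, 30]

def greedy_mkp(items, capacities):
    """Assign items to knapsacks in decreasing value/weight order;
    -1 in the assignment means the item stays unassigned."""
    remaining = list(capacities)
    assignment = [-1] * len(items)
    order = sorted(range(len(items)), key=lambda i: -items[i][1] / items[i][0])
    for i in order:
        w, _ = items[i]
        for k, cap in enumerate(remaining):
            if w <= cap:                 # first knapsack that still fits
                assignment[i] = k
                remaining[k] -= w
                break
    total = sum(items[i][1] for i, k in enumerate(assignment) if k >= 0)
    return assignment, total

assignment, total_value = greedy_mkp(items, capacities)
```

A population-based meta-heuristic would instead search over many candidate assignments in parallel, which is also what makes the parallel-processing angle above natural.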


A multivariate water quality parameter prediction model using recurrent neural network

Mar 25, 2020
Dhruti Dheda, Ling Cheng

Figures 1–4 for A multivariate water quality parameter prediction model using recurrent neural network

The global degradation of water resources is a matter of great concern, especially for the survival of humanity. Effective monitoring and management of existing water resources are necessary to achieve and maintain optimal water quality. Predicting the quality of water resources will aid in the timely identification of possible problem areas and thus increase the efficiency of water management. The purpose of this research is to develop a water quality prediction model based on water quality parameters, through the application of a specialised recurrent neural network (RNN), Long Short-Term Memory (LSTM), and the use of historical water quality data collected over several years. Both single-step and multiple-step multivariate LSTM models were developed, using a Rectified Linear Unit (ReLU) activation function and a Root Mean Square Propagation (RMSprop) optimiser. The single-step model attained an error of 0.01 mg/L, whilst the multiple-step model achieved a Root Mean Squared Error (RMSE) of 0.227 mg/L.
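Feeding multivariate series into a single-step LSTM starts with windowing: slide a fixed-length window over the parameter history and pair each window with the next observation of the target parameter. The sizes and the target column below are illustrative assumptions, not the paper's dataset.

```python
import numpy as np

# Hypothetical sliding-window preparation for a single-step LSTM.
T, F = 100, 4                  # 100 timesteps, 4 water-quality parameters
series = np.random.default_rng(0).normal(size=(T, F))
window = 10                    # history length fed to the model
target_col = 0                 # hypothetical target parameter column

# X[i] is the window ending at t = i + window - 1;
# y[i] is the target parameter one step later.
X = np.stack([series[i:i + window] for i in range(T - window)])
y = series[window:, target_col]
```

The resulting `(samples, window, features)` tensor is exactly the input shape recurrent layers expect; a multiple-step variant would instead take a slice of future targets per window.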

* 7 pages, 5 figures, 2 tables, submitted to the FUSION 2020 conference for review 

Two-Step Surface Damage Detection Scheme using Convolutional Neural Network and Artificial Neural Network

Mar 24, 2020
Alice Yi Yang, Ling Cheng

Figures 1–4 for Two-Step Surface Damage Detection Scheme using Convolutional Neural Network and Artificial Neural Network

Surface damage detection on concrete is important because damage can affect the structural integrity of a structure. This paper proposes a two-step surface damage detection scheme using a Convolutional Neural Network (CNN) and an Artificial Neural Network (ANN). The CNN classifies given input images into two categories: positive and negative. The positive category is assigned when surface damage is present within the image; otherwise the image is classified as negative. This is an image-based classification. The ANN accepts image inputs that have been classified as positive by the CNN, which reduces the number of images that must be further processed. The ANN performs feature-based classification, in which the features are extracted from the edges detected within the image using Canny edge detection. A total of 19 features are extracted from the detected edges and serve as inputs to the ANN. The purpose of the ANN is to highlight only the positively damaged edges within the image. The CNN achieves an accuracy of 80.7% for image classification and the ANN achieves an accuracy of 98.1% for surface damage detection. The decreased accuracy of the CNN is due to false positive detections; however, false positives are tolerated whereas false negatives are not. The false negative rate for both the CNN and the ANN in the two-step scheme is 0%.
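The gating structure of the two-step scheme can be sketched independently of the actual networks: a cheap stage-1 classifier filters images, and only positives reach the stage-2 feature-based classifier. Both stages below are stand-in stubs (a variance threshold and a gradient count instead of the paper's CNN and Canny-fed ANN); the images and thresholds are illustrative.

```python
import numpy as np

# Hypothetical two-step gating pipeline with stub classifiers.
def stage1_is_positive(img, var_thresh=0.5):
    """Stand-in for the CNN gate: flag images with enough variation."""
    return img.var() > var_thresh

def stage2_edge_features(img):
    """Stand-in for the edge-feature stage (crude gradient, not Canny)."""
    gx = np.abs(np.diff(img, axis=1))
    return {"edge_pixels": int((gx > 0.8).sum())}

rng = np.random.default_rng(0)
flat = np.zeros((8, 8))             # low-variance image: filtered out
noisy = rng.normal(size=(8, 8))     # high-variance image: passed through

results = [stage2_edge_features(im) for im in (flat, noisy)
           if stage1_is_positive(im)]
```

The design choice the paper exploits is visible even in this stub: stage 1 may pass false positives (tolerated, since stage 2 re-examines them) but must not produce false negatives, because a rejected image never reaches stage 2.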


Optimal DG allocation and sizing in power system networks using swarm-based algorithms

Feb 19, 2020
Kayode Adetunji, Ivan Hofsajer, Ling Cheng

Figures 1–4 for Optimal DG allocation and sizing in power system networks using swarm-based algorithms

Distributed generation (DG) units are power-generating plants that are very important to the architecture of present power system networks. The benefit of adding DG units is an increase in the power supplied to a network. However, installing DG units can have an adverse effect if they are not properly allocated and/or sized; there is therefore a need to allocate and size them optimally to avoid problems such as voltage instability and expensive investment costs. In this paper, two swarm-based meta-heuristic algorithms, particle swarm optimization (PSO) and the whale optimization algorithm (WOA), were developed to solve the optimal placement and sizing of DG units for transmission network planning. A supporting technique, loss sensitivity factors (LSF), was used to identify potential buses for the optimal location of DG units. The feasibility of the algorithms was confirmed on two IEEE bus test systems (14-bus and 30-bus). Comparison results showed that both algorithms produce good solutions and that each outperforms the other on different metrics. The WOA's real power loss reduction considering techno-economic factors is 6.14 MW on the IEEE 14-bus system and 10.77 MW on the 30-bus system, compared to the PSO's 6.47 MW and 11.73 MW respectively. The PSO yields a smaller total DG unit size in both systems, at 133.45 MW and 82.44 MW, compared to the WOA's 152.21 MW and 82.44 MW respectively. The paper unveils the strengths and weaknesses of the PSO and the WOA in the application of optimal sizing of DG units in transmission networks.
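A minimal PSO sketch for the sizing half of the problem: each particle is a candidate vector of DG sizes, and the objective is a toy quadratic stand-in for real power loss rather than an actual load-flow calculation. The optimum location, bounds, and coefficients are illustrative assumptions.

```python
import numpy as np

# Hypothetical PSO sizing two DG units against a toy loss surrogate.
rng = np.random.default_rng(0)

def power_loss(x):
    """Toy stand-in for real power loss: minimised at sizes (20, 35) MW."""
    return np.sum((x - np.array([20.0, 35.0])) ** 2, axis=-1)

n_particles, dim, iters = 15, 2, 60
pos = rng.uniform(0, 100, size=(n_particles, dim))   # candidate MW sizes
vel = np.zeros_like(pos)
pbest = pos.copy()
gbest = pos[np.argmin(power_loss(pos))].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    # Inertia + cognitive pull (pbest) + social pull (gbest).
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 100)    # keep sizes within plant limits
    better = power_loss(pos) < power_loss(pbest)
    pbest[better] = pos[better]
    gbest = pbest[np.argmin(power_loss(pbest))].copy()

best_loss = float(power_loss(gbest))
```

In the study above, the objective would come from a load-flow solver over the IEEE test systems, with LSF pre-selecting the candidate buses so the swarm only searches over promising locations and sizes.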
