Ponnuthurai Nagaratnam Suganthan

IF2Net: Innately Forgetting-Free Networks for Continual Learning

Jun 18, 2023
Depeng Li, Tianqi Wang, Bingrong Xu, Kenji Kawaguchi, Zhigang Zeng, Ponnuthurai Nagaratnam Suganthan

Continual learning aims to absorb new concepts incrementally without interfering with previously learned knowledge. Motivated by the fact that neural networks store information in the weights on their connections, we investigate how to design an Innately Forgetting-Free Network (IF2Net) for the continual learning setting. This study proposes a straightforward yet effective learning paradigm that keeps the weights associated with each seen task untouched before and after learning a new task. We first present representation-level learning on task sequences with random weights: the representations drifted by randomization are tweaked back to their separate task-optimal working states, while the weights involved are frozen and reused (in contrast to the well-known layer-wise weight updates). Sequential decision-making without forgetting is then achieved by projecting the output-weight updates into a parsimonious orthogonal space, so that the adaptations do not disturb old knowledge while model plasticity is maintained. By integrating the respective strengths of randomization and orthogonalization, IF2Net allows a single network to learn an unlimited number of mapping rules without being told task identities at test time. We validate the effectiveness of our approach through extensive theoretical analysis and empirical study.
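
The two ingredients the abstract combines, frozen random weights for the representation and orthogonal projection of the output-weight updates, can be pictured with a minimal sketch like the one below. This is an illustrative toy, not the authors' IF2Net code; the class name, the recursive-least-squares-style projector update, and all hyperparameters are assumptions.

```python
import numpy as np

class OrthogonalOutputLearner:
    """Toy continual learner: frozen random feature weights plus output-weight
    updates projected onto the subspace orthogonal to previous tasks' features."""

    def __init__(self, d_in, d_hidden, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W_rand = rng.standard_normal((d_in, d_hidden))  # frozen, reused across tasks
        self.W_out = np.zeros((d_hidden, n_classes))         # the only trainable weights
        self.P = np.eye(d_hidden)                            # projector orthogonal to old tasks

    def features(self, X):
        return np.tanh(X @ self.W_rand)                      # representation via random weights

    def train_task(self, X, Y, lr=0.1, epochs=200, alpha=1e-3):
        H = self.features(X)
        for _ in range(epochs):
            grad = H.T @ (H @ self.W_out - Y) / len(X)       # least-squares gradient
            self.W_out -= lr * (self.P @ grad)               # projected update keeps old outputs
        # shrink the admissible update subspace using this task's representations
        # (recursive-least-squares-style projector update; alpha is a small regularizer)
        K = np.linalg.inv(alpha * np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P

    def predict(self, X):
        return self.features(X) @ self.W_out
```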

* 16 pages, 8 figures. Under review 

SFE: A Simple, Fast and Efficient Feature Selection Algorithm for High-Dimensional Data

Mar 17, 2023
Behrouz Ahadzadeh, Moloud Abdar, Fatemeh Safara, Abbas Khosravi, Mohammad Bagher Menhaj, Ponnuthurai Nagaratnam Suganthan

In this paper, a new feature selection algorithm, called SFE (Simple, Fast, and Efficient), is proposed for high-dimensional datasets. The SFE algorithm performs its search with a single search agent and two operators, non-selection and selection, and comprises two phases: exploration and exploitation. In the exploration phase, the non-selection operator performs a global search over the entire problem space for irrelevant, redundant, trivial, and noisy features, changing their status from selected to non-selected. In the exploitation phase, the selection operator searches the problem space for features with a high impact on the classification results, changing their status from non-selected to selected. SFE is successful at selecting features from high-dimensional datasets; however, once the dimensionality of a dataset has been reduced, its performance does not improve significantly further. In such situations, an evolutionary computation method can find a more efficient feature subset in the new, reduced search space. To address this, the paper also proposes a hybrid algorithm, SFE-PSO (SFE combined with particle swarm optimization), to find an optimal feature subset. The efficiency and effectiveness of SFE and SFE-PSO are evaluated on 40 high-dimensional datasets and compared with six recently proposed feature selection algorithms. The results indicate that the two proposed algorithms significantly outperform the others and can serve as efficient and effective methods for selecting features from high-dimensional datasets.
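
A minimal sketch of how an SFE-style single-agent search with a non-selection operator (exploration) and a selection operator (exploitation) might look. It is not the paper's reference implementation; the kNN evaluator, operator rates, phase split, and function names are assumptions made for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def evaluate(mask, X, y):
    """3-fold accuracy of a kNN classifier on the selected features (0 if none selected)."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def sfe_like(X, y, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.integers(0, 2, X.shape[1])             # search agent: 1 = selected feature
    best = evaluate(mask, X, y)
    for t in range(iters):
        cand = mask.copy()
        ones = np.flatnonzero(cand == 1)
        zeros = np.flatnonzero(cand == 0)
        if t < iters // 2 and ones.size > 1:          # exploration: non-selection operator
            drop = rng.choice(ones, size=max(1, ones.size // 10), replace=False)
            cand[drop] = 0
        elif zeros.size > 0:                          # exploitation: selection operator
            cand[rng.choice(zeros, size=1)] = 1
        score = evaluate(cand, X, y)
        if score >= best:                             # keep the better mask
            mask, best = cand, score
    return mask, best
```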

Weighting and Pruning based Ensemble Deep Random Vector Functional Link Network for Tabular Data Classification

Jan 21, 2022
Qiushi Shi, Ponnuthurai Nagaratnam Suganthan, Rakesh Katuwal

In this paper, we first introduce batch normalization to the edRVFL network. This re-normalization helps the network avoid divergence of the hidden features. We then propose novel variants of the Ensemble Deep Random Vector Functional Link (edRVFL) network. Weighted edRVFL (WedRVFL) uses weighting methods to assign training samples different weights in different layers according to how confidently they were classified in the previous layer, thereby increasing the ensemble's diversity and accuracy. Furthermore, a pruning-based edRVFL (PedRVFL) is proposed: inferior neurons are pruned based on their importance for classification before the next hidden layer is generated, ensuring that randomly generated inferior features do not propagate to deeper layers. Finally, the combination of weighting and pruning, called the Weighting and Pruning based Ensemble Deep Random Vector Functional Link Network (WPedRVFL), is proposed. We compare the performance of these methods with other state-of-the-art deep feedforward neural networks (FNNs) on 24 tabular UCI classification datasets. The experimental results illustrate the superior performance of our proposed methods.
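
A rough sketch of the shared edRVFL machinery the abstract builds on: random hidden features with batch normalization, a ridge-regression output layer with direct links, pruning of low-importance neurons before the next layer, and averaging the per-layer outputs as the ensemble. The per-sample weighting of WedRVFL is omitted, and the pruning criterion, names, and hyperparameters below are illustrative assumptions rather than the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def edrvfl_like_train(X, Y, n_layers=3, n_hidden=128, prune_frac=0.2, ridge=1e-2):
    """Y is one-hot (n_samples, n_classes); returns averaged training-set scores."""
    preds, inp = [], X
    for _ in range(n_layers):
        W = rng.standard_normal((inp.shape[1], n_hidden))
        H = np.maximum(0.0, inp @ W)                     # random hidden features (ReLU)
        H = (H - H.mean(0)) / (H.std(0) + 1e-8)          # batch normalization of hidden features
        D = np.hstack([X, H])                            # direct links: raw features plus H
        beta = np.linalg.solve(D.T @ D + ridge * np.eye(D.shape[1]), D.T @ Y)
        preds.append(D @ beta)                           # this layer's classifier output
        # prune hidden neurons whose output weights contribute least before the next layer
        importance = np.abs(beta[X.shape[1]:]).sum(axis=1)
        keep = importance.argsort()[int(prune_frac * n_hidden):]
        inp = np.hstack([X, H[:, keep]])                 # pruned features feed the next layer
    return sum(preds) / n_layers                         # ensemble: average the layer outputs
```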

* 8 tables, 8 figures, 31 pages 

An Autonomous Path Planning Method for Unmanned Aerial Vehicle based on A Tangent Intersection and Target Guidance Strategy

Jun 07, 2020
Huan Liu, Xiamiao Li, Mingfeng Fan, Guohua Wu, Witold Pedrycz, Ponnuthurai Nagaratnam Suganthan

Unmanned aerial vehicle (UAV) path planning enables UAVs to avoid obstacles and reach the target efficiently. To generate high-quality, collision-free paths for UAVs, this paper proposes a novel autonomous path planning algorithm based on a tangent intersection and target guidance strategy (APPATT). Guided by the target, the elliptic tangent graph method generates two sub-paths whenever an obstacle is encountered, one of which is selected according to heuristic rules. The UAV flies along the selected sub-path and repeatedly adjusts its flight path in this way until the collision-free path reaches the target. Considering the UAV's kinematic constraints, a cubic B-spline curve is employed to smooth the waypoints and obtain a feasible path. The experimental results show that, compared with A*, PRM, RRT, and VFH, APPATT generates the shortest collision-free path within 0.05 seconds for each instance in static environments. Moreover, compared with VFH and RRTRW, APPATT generates satisfactory collision-free paths in uncertain environments in nearly real time. Notably, APPATT is able to escape from simple traps within a reasonable time.
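
Only the final smoothing step of APPATT is concrete enough to sketch briefly: fitting a cubic B-spline through the planned waypoints, as the abstract describes. The waypoints and the smoothing factor below are made-up values, and the tangent-graph planning itself is not reproduced here.

```python
import numpy as np
from scipy import interpolate

# Hypothetical 2D waypoints produced by an upstream planner
waypoints = np.array([[0.0, 0.0], [2.0, 1.5], [4.0, 1.0], [6.0, 3.0], [8.0, 2.5]])
x, y = waypoints[:, 0], waypoints[:, 1]

# k=3 gives a cubic B-spline; a small smoothing factor s trades fidelity for smoothness
tck, _ = interpolate.splprep([x, y], k=3, s=0.1)
u_fine = np.linspace(0.0, 1.0, 200)
x_smooth, y_smooth = interpolate.splev(u_fine, tck)
path = np.column_stack([x_smooth, y_smooth])   # densely sampled, smooth flight path
```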
