Deep cross-modal hashing has recently gained increasing attention. In many practical scenarios, however, data are distributed across clients and cannot be centrally collected due to privacy concerns, which greatly degrades cross-modal hashing performance on each client. Moreover, statistical heterogeneity, model heterogeneity, and the practice of forcing every client to accept the same parameters make it difficult to apply federated learning to cross-modal hash learning. In this paper, we propose a novel method called prototype-based layered federated cross-modal hashing. Specifically, prototypes are introduced to learn the similarity between instances and classes on the server, reducing the impact of statistical heterogeneity (non-IID data) across clients, and we monitor the distance between local and global prototypes to further improve performance. To realize personalized federated learning, a hypernetwork is deployed on the server to dynamically update the weights of different layers of each local model. Experimental results on benchmark datasets show that our method outperforms state-of-the-art methods.
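As a minimal illustration of the prototype idea described above, the sketch below computes class prototypes as mean embeddings and measures the distance between a local and a global prototype as a monitoring signal. All names and values are hypothetical; this is not the paper's training procedure.

```python
# Hypothetical sketch: class prototypes as mean embeddings, plus the
# local-vs-global prototype distance used as a monitoring signal.

def prototype(embeddings):
    # Prototype = element-wise mean of the class's embedding vectors.
    n = len(embeddings)
    dim = len(embeddings[0])
    return [sum(e[i] for e in embeddings) / n for i in range(dim)]

def distance(p, q):
    # Euclidean distance between two prototypes.
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

local = prototype([[1.0, 0.0], [3.0, 2.0]])   # client-side class prototype
globl = prototype([[0.0, 0.0], [2.0, 0.0]])   # server-side (global) prototype
gap = distance(local, globl)                  # monitored local-to-global gap
```

A large gap would indicate that a client's local representation of a class has drifted away from the server's aggregate view.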
Industrial robots play a vital role in automated production and have been widely deployed in industrial activities such as handling and welding. However, an uncalibrated robot suffers from low absolute positioning accuracy owing to machining and assembly tolerances, which cannot satisfy the requirements of high-precision manufacturing. To address this issue, we propose a novel calibration method based on an unscented Kalman filter and a variable step-size Levenberg-Marquardt algorithm. This work has three ideas: a) proposing a novel variable step-size Levenberg-Marquardt algorithm to address the local-optimum issue of the standard Levenberg-Marquardt algorithm; b) employing an unscented Kalman filter to reduce the influence of measurement noise; and c) developing a novel calibration method that incorporates the unscented Kalman filter into the variable step-size Levenberg-Marquardt algorithm. Furthermore, we conduct extensive experiments on an ABB IRB 120 industrial robot. The experimental results show that the proposed method achieves much higher calibration accuracy than several state-of-the-art calibration methods. Hence, this work represents an important advance in the field of robot calibration.
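For intuition about the Levenberg-Marquardt family that this method builds on, the following is a generic scalar LM sketch with an adaptive damping factor (decreased after accepted steps, increased after rejected ones). It is not the paper's variable step-size variant, and the model y = exp(a*x) is an invented toy problem.

```python
# Generic Levenberg-Marquardt sketch with adaptive damping (illustrative
# only; not the paper's variable step-size algorithm).
import math

xs = [0.1 * i for i in range(10)]
ys = [math.exp(0.7 * x) for x in xs]     # synthetic data with true a = 0.7

def cost(a):
    return sum((math.exp(a * x) - y) ** 2 for x, y in zip(xs, ys))

def lm_fit(a=0.0, lam=1.0, iters=30):
    for _ in range(iters):
        # Residuals r_i = exp(a x_i) - y_i, Jacobian J_i = x_i exp(a x_i).
        g = sum(x * math.exp(a * x) * (math.exp(a * x) - y)
                for x, y in zip(xs, ys))
        h = sum((x * math.exp(a * x)) ** 2 for x in xs)
        step = -g / (h + lam)            # damped Gauss-Newton step
        if cost(a + step) < cost(a):     # accept: act more like Gauss-Newton
            a, lam = a + step, lam * 0.5
        else:                            # reject: act more like gradient descent
            lam *= 2.0
    return a
```

The damping parameter `lam` interpolates between Gauss-Newton (small `lam`) and small gradient-descent steps (large `lam`), which is the trade-off a variable step-size variant further refines.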
Optimal well placement and injection-production control are crucial for maximizing the financial profit of reservoir development over the project lifetime. Meta-heuristic algorithms have shown good performance in solving complex, nonlinear, and non-continuous optimization problems, but they typically require a large number of numerical simulation runs during the optimization process. In this work, a novel and efficient data-driven evolutionary algorithm, called the generalized data-driven differential evolutionary algorithm (GDDE), is proposed to reduce the number of simulation runs in well-placement and control optimization problems. A probabilistic neural network (PNN) is adopted as the classifier to select informative and promising candidates, and the most uncertain candidate, measured by Euclidean distance, is prescreened and evaluated with a numerical simulator. Subsequently, a local surrogate model is built with radial basis functions (RBF), and the optimum of the surrogate, found by the optimizer, is evaluated by the numerical simulator to accelerate convergence. Notably, the shape factors of the RBF model and the PNN are tuned by solving a sub-expensive hyper-parameter optimization problem. The results show that the proposed optimization algorithm is very promising for the well-placement optimization of a two-dimensional reservoir and the joint optimization of the Egg model.
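The surrogate step can be illustrated in miniature: fit a Gaussian RBF interpolant to a handful of expensive evaluations, optimize the cheap surrogate, and verify its optimum with the "simulator". This is a generic sketch of the idea only; the paper's GDDE, PNN prescreening, and shape-factor tuning are not reproduced, and `expensive` is an invented stand-in for a reservoir simulation.

```python
# Toy surrogate-assisted optimization with a Gaussian RBF interpolant.
import math

def expensive(x):                 # invented stand-in for a costly simulation
    return (x - 2.0) ** 2 + 1.0

def solve(A, b):                  # Gaussian elimination for the small RBF system
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def rbf_fit(xs, ys, shape=1.0):
    # Interpolant s(x) = sum_i w_i * exp(-shape * (x - x_i)^2).
    A = [[math.exp(-shape * (a - b) ** 2) for b in xs] for a in xs]
    w = solve(A, ys)
    return lambda x: sum(wi * math.exp(-shape * (x - xi) ** 2)
                         for wi, xi in zip(w, xs))

# Fit the surrogate on a few evaluated designs, then search it cheaply.
xs = [0.0, 1.0, 2.5, 3.0, 4.0]
surrogate = rbf_fit(xs, [expensive(x) for x in xs])
cands = [1.0 + i * 0.01 for i in range(201)]   # dense grid on [1, 3]
best = min(cands, key=surrogate)               # optimum of the cheap surrogate
true_val = expensive(best)                     # verify with the "simulator"
```

Only the few points in `xs` plus the single verification of `best` cost a "simulation"; the grid search touches only the surrogate.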
A high-dimensional and incomplete (HDI) matrix frequently appears in various big-data-related applications, and it describes the inherently non-negative interactions among numerous nodes. A non-negative latent factor (NLF) model performs efficient representation learning on an HDI matrix, where the learning process mostly relies on a single latent factor-dependent, non-negative and multiplicative update (SLF-NMU) algorithm. However, an SLF-NMU algorithm updates a latent factor based on the current update increment only, without appropriate consideration of past learning information, resulting in slow convergence. Inspired by the prominent success of a proportional-integral (PI) controller in various applications, this paper proposes a Proportional-Integral-incorporated Non-negative Latent Factor (PI-NLF) model with two-fold ideas: a) establishing an Increment Refinement (IR) mechanism that considers past update increments following the principle of a PI controller; and b) designing an IR-based SLF-NMU (ISN) algorithm to accelerate the convergence of the resultant model. Empirical studies on four HDI datasets demonstrate that the PI-NLF model outperforms state-of-the-art models in both computational efficiency and estimation accuracy for the missing data of an HDI matrix. Hence, this study unveils the feasibility of boosting the performance of a non-negative learning algorithm through an error feedback controller.
High-dimensional and sparse (HiDS) matrices are omnipresent in a variety of big-data-related applications. Latent factor analysis (LFA) is a typical representation learning method that extracts useful yet latent knowledge from HiDS matrices via low-rank approximation. Current LFA-based models mainly adopt a single-metric representation, in which the representation strategy designed for the approximation loss function is fixed and exclusive. However, real-world HiDS matrices are commonly heterogeneous and inclusive, with diverse underlying patterns, so a single-metric representation is likely to yield inferior performance. Motivated by this, we propose a multi-metric latent factor (MMLF) model in this paper. Its main idea is two-fold: 1) two vector spaces and three Lp-norms are simultaneously employed to develop six variants of the LFA model, each of which resides in a unique metric representation space; and 2) all the variants are ensembled with a tailored, self-adaptive weighting strategy. As such, the proposed MMLF enjoys the merits of a set of disparate metric spaces all at once, achieving a comprehensive and unbiased representation of HiDS matrices. Theoretical analysis guarantees that MMLF attains a performance gain. Extensive experiments on eight real-world HiDS datasets, spanning a wide range of industrial and scientific domains, verify that MMLF significantly outperforms ten state-of-the-art shallow and deep counterparts.
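A minimal, hypothetical sketch of error-driven self-adaptive ensembling: base variants with lower recent loss receive exponentially larger weights, and the ensemble prediction is their weighted sum. The softmax-over-negative-loss rule below is a generic scheme, not the exact weighting strategy of the MMLF paper.

```python
# Generic self-adaptive ensemble weighting (illustrative only).
import math

def update_weights(losses, temperature=1.0):
    # Lower recent loss -> exponentially larger weight; weights sum to 1.
    scores = [math.exp(-l / temperature) for l in losses]
    z = sum(scores)
    return [s / z for s in scores]

def ensemble_predict(preds, weights):
    # Weighted combination of the base models' predictions.
    return sum(w * p for w, p in zip(weights, preds))

weights = update_weights([0.2, 0.5, 1.0])   # three variants' recent losses
pred = ensemble_predict([3.1, 2.7, 4.0], weights)
```

Because the weights form a convex combination, the ensemble output always stays within the range of the base predictions while favoring the historically better variants.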
A high-dimensional and sparse (HiDS) matrix is frequently encountered in big-data-related applications such as e-commerce systems and social network services. Performing highly accurate representation learning on it is of great significance owing to the strong demand for extracting latent knowledge and patterns from it. Latent factor analysis (LFA), which represents an HiDS matrix by learning low-rank embeddings based on its observed entries only, is one of the most effective and efficient approaches to this issue. However, most existing LFA-based models perform such embeddings on an HiDS matrix directly, without exploiting its hidden graph structures, thereby suffering accuracy loss. To address this issue, this paper proposes a graph-incorporated latent factor analysis (GLFA) model. It adopts two-fold ideas: 1) a graph is constructed to identify the hidden high-order interaction (HOI) among nodes described by an HiDS matrix; and 2) a recurrent LFA structure is carefully designed to incorporate HOI, thereby improving the representation learning ability of the resultant model. Experimental results on three real-world datasets demonstrate that GLFA outperforms six state-of-the-art models in predicting the missing data of an HiDS matrix, which evidently supports its strong representation learning ability on HiDS data.
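One simple way to surface hidden graph structure in a sparse matrix is to connect two row nodes whenever they share an observed column, i.e., a two-hop path in the bipartite graph of observed entries. The construction below is a generic illustration of such high-order interactions, not the exact HOI definition used by GLFA.

```python
# Two-hop (high-order) interactions among row nodes of a sparse matrix,
# derived from its observed (row, column) entries. Illustrative only.
observed = {(0, 'a'), (0, 'b'), (1, 'b'), (2, 'c')}   # toy observed entries

def two_hop_rows(observed):
    # Index rows by the columns they touch.
    cols = {}
    for r, c in observed:
        cols.setdefault(c, set()).add(r)
    # Two rows interact if they share at least one observed column.
    hoi = {}
    for r, c in observed:
        for other in cols[c]:
            if other != r:
                hoi.setdefault(r, set()).add(other)
    return hoi

links = two_hop_rows(observed)   # rows connected through a shared column
```

Here rows 0 and 1 are linked through column 'b', while row 2 remains isolated; such links supply relational signal that the raw observed entries alone do not express.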
Over the past decades, industrial manipulators have played a vital role in various fields, such as aircraft and automobile manufacturing. However, an uncalibrated industrial manipulator suffers from low absolute positioning accuracy, which greatly restricts its application in high-precision intelligent manufacturing. Recent manipulator calibration methods have been developed to address this issue, yet they frequently encounter long-tail convergence and low calibration accuracy. To address this thorny issue, this work proposes a novel manipulator calibration method that incorporates an extended Kalman filter with a Quadratic Interpolated Beetle Antennae Search algorithm. This paper has three ideas: a) proposing a new Quadratic Interpolated Beetle Antennae Search algorithm to deal with the local-optimum and slow-convergence issues of the Beetle Antennae Search algorithm; b) adopting an extended Kalman filter to suppress non-Gaussian noise; and c) developing a new manipulator calibration method that combines the extended Kalman filter with the Quadratic Interpolated Beetle Antennae Search algorithm to calibrate a manipulator. Extensive experimental results on an ABB IRB120 industrial manipulator demonstrate that the proposed method achieves much higher calibration accuracy than several state-of-the-art calibration methods.
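The quadratic-interpolation ingredient can be shown in isolation: fit a parabola through three probed points and jump to its vertex, a classic refinement used in line searches. This is a generic sketch of that step only, not the paper's full QIBAS algorithm, and the test function is invented.

```python
# Successful parabolic interpolation: estimate a minimizer from three
# probed points by fitting a parabola and taking its vertex.
def quad_interp_min(x1, x2, x3, f):
    f1, f2, f3 = f(x1), f(x2), f(x3)
    num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den        # vertex of the fitted parabola

# For an exactly quadratic objective, one interpolation step is exact.
est = quad_interp_min(0.0, 1.0, 3.0, lambda x: (x - 1.7) ** 2 + 0.3)
```

Embedding such a step inside a search heuristic lets the algorithm exploit local curvature instead of relying on fixed-size probing moves alone.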
Recently, industrial robots have played a significant role in intelligent manufacturing, so ensuring high positioning precision of a robot is an urgent issue. To address this issue, a novel calibration method based on a powerful ensemble of various algorithms is proposed. This paper has two ideas: a) developing eight calibration methods to identify the kinematic parameter errors; and b) establishing an effective ensemble to search for the calibrated kinematic parameters. Extensive experimental results show that this ensemble achieves: 1) higher calibration accuracy for the robot; 2) model diversity; and 3) strong generalization ability.
Industrial robot arms are extremely important for intelligent manufacturing. An industrial robot arm commonly enjoys high repetitive positioning accuracy while suffering from low absolute positioning accuracy, which greatly restricts its application in high-precision manufacturing, such as automobile manufacture. To address this issue, this work proposes a novel robot arm calibration method based on cubic interpolated beetle antennae search (CIBAS). This study has three ideas: a) developing a novel CIBAS algorithm, which effectively addresses the local-optimum issue of the Beetle Antennae Search algorithm; b) utilizing a particle filter to reduce the influence of non-Gaussian noise; and c) proposing a new calibration method that incorporates the CIBAS algorithm and the particle filter to search for the optimal kinematic parameters. Experimental results on an ABB IRB120 industrial robot arm demonstrate that the proposed method achieves much higher calibration accuracy than several state-of-the-art calibration methods.
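For background, the basic Beetle Antennae Search (BAS) step that CIBAS builds on probes the objective at two "antenna" points along a random direction and moves toward the better side, shrinking both the step and the antenna length over time. The sketch below uses a greedy-acceptance variant for stability and a toy 2-D objective; it illustrates plain BAS only, not the paper's cubic interpolation or particle filter.

```python
# Basic Beetle Antennae Search with greedy acceptance (illustrative only;
# the paper's CIBAS adds cubic interpolation and a particle filter).
import math, random

def bas_minimize(f, x0, step=1.0, d=0.5, eta=0.98, iters=500, seed=0):
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(iters):
        b = [rng.gauss(0.0, 1.0) for _ in x]              # random direction
        norm = math.sqrt(sum(v * v for v in b)) or 1.0
        b = [v / norm for v in b]                          # unit antenna axis
        left = [xi + d * bi for xi, bi in zip(x, b)]       # left antenna probe
        right = [xi - d * bi for xi, bi in zip(x, b)]      # right antenna probe
        sign = 1.0 if f(left) < f(right) else -1.0         # toward better probe
        cand = [xi + sign * step * bi for xi, bi in zip(x, b)]
        if f(cand) < f(x):                                 # greedy acceptance
            x = cand
        step *= eta                                        # shrink step length
        d = max(d * eta, 1e-4)                             # shrink antenna length
    return x

sol = bas_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2, [5.0, 5.0])
```

Because only two probes are made per iteration regardless of dimension, BAS is very cheap per step, but the fixed probe geometry is what makes it prone to the local-optimum behavior that interpolation-based refinements try to correct.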
Zero-Shot Sketch-based Image Retrieval (ZS-SBIR) is a challenging task because of the large domain gap between sketches and natural images as well as the semantic inconsistency between seen and unseen categories. Previous literature bridges seen and unseen categories through semantic embedding, which requires prior knowledge of the exact class names and additional extraction effort. Moreover, most works reduce the domain gap by mapping sketches and natural images into a common high-level space using constructed sketch-image pairs, which ignores the unpaired information between images and sketches. To address these issues, we propose a novel Three-Stream Joint Training Network (3JOIN) for the ZS-SBIR task. To narrow the domain gap between sketches and images, we extract edge maps from natural images and treat them as a bridge between the two domains, since edge maps share similar content with images and a similar style with sketches. To exploit a sufficient combination of sketches, natural images, and edge maps, a novel three-stream joint training network is proposed. In addition, we use a teacher network to extract the implicit semantics of samples without the aid of other semantic information and transfer the learned knowledge to unseen classes. Extensive experiments on two real-world datasets demonstrate the superiority of the proposed method.