A fault diagnosis method for power electronics converters based on a deep feedforward network and wavelet compression is proposed in this paper. Transient historical data compressed by the wavelet transform are used to train the fault diagnosis classifier. Firstly, correlation analysis of the voltage or current data collected under various fault states is performed to remove redundant features and sampling points. Secondly, the wavelet transform is used to remove redundant data within the features, so the training sample data are greatly compressed. The deep feedforward network is trained on the low-frequency components of the features, which greatly accelerates training. The average accuracy of the fault diagnosis classifier exceeds 97%. Finally, the fault diagnosis classifier is tested, and the final diagnosis is determined from multiple groups of transient data, which improves the reliability of the diagnosis results. The experimental results show that the classifier has strong generalization ability and can accurately locate open-circuit faults in IGBTs.
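The compression step can be sketched with a hand-rolled single-level Haar transform applied repeatedly, keeping only the low-frequency approximation coefficients. This is a minimal stand-in: the paper's actual wavelet basis, decomposition depth, and feature set are not specified here.

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the Haar wavelet transform: returns (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def compress(signal, levels=2):
    """Keep only the low-frequency approximation after `levels` decompositions,
    halving the sample count at each level."""
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, _ = haar_dwt_level(a)
    return a

# Toy transient signal standing in for a fault-state voltage/current record.
t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(256)
features = compress(x, levels=2)
print(len(x), "->", len(features))  # 256 -> 64
```

The compressed vector `features` would then serve as one training sample for the feedforward network, a 4x reduction in input dimensionality at two decomposition levels.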
Power electronics converters have been widely used in aerospace systems, DC transmission, distributed energy, smart grids and so forth, and their reliability has become a research hotspot in both academia and industry. Monitoring open-circuit faults in power electronics converters and performing intelligent fault diagnosis are of great significance for avoiding secondary faults, reducing the time and cost of operation and maintenance, and improving the reliability of power electronics systems. Firstly, the fault features of power electronics converters are analyzed and summarized. Secondly, AI-based fault diagnosis methods and application examples in power electronics converters are reviewed, and a fault diagnosis method combining random forests with transient fault features is proposed for three-phase power electronics converters. Finally, future research challenges and directions for AI-based fault diagnosis methods are pointed out.
The Deep Q-learning Network (DQN) is a successful approach that combines reinforcement learning with deep neural networks and has led to the widespread application of reinforcement learning. One challenging problem when applying DQN or other reinforcement learning algorithms to real-world problems is data collection. Therefore, improving data efficiency is one of the most important problems in reinforcement learning research. In this paper, we propose a framework that uses a Max-Mean loss in the Deep Q-Network (M$^2$DQN). Instead of sampling one batch of experiences per training step, we sample several batches from the experience replay buffer and update the parameters so that the maximum TD-error across these batches is minimized. The proposed method can be combined with most existing DQN techniques by replacing the loss function. We verify the effectiveness of this framework with one of the most widely used techniques, Double DQN (DDQN), on several gym games. The results show that our method leads to a substantial improvement in both learning speed and final performance.
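The batch-selection idea can be sketched in a tabular setting: sample several batches, compute the mean squared TD-error of each, and take the gradient step on the batch whose loss is largest. The tabular Q-tables, buffer contents, and batch count below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def td_errors(batch, q, q_target, gamma=0.99):
    """Squared TD errors for a batch of (s, a, r, s') transitions (tabular sketch)."""
    s, a, r, s2 = batch
    target = r + gamma * q_target[s2].max(axis=1)
    return (q[s, a] - target) ** 2

# Hypothetical tabular Q-functions over 10 states and 4 actions.
q = rng.normal(size=(10, 4))
q_target = q.copy()

# Sample k batches from a toy replay buffer.
k, batch_size = 4, 32
batches = [(rng.integers(0, 10, batch_size),   # states
            rng.integers(0, 4, batch_size),    # actions
            rng.normal(size=batch_size),       # rewards
            rng.integers(0, 10, batch_size))   # next states
           for _ in range(k)]

# Max-Mean selection: the update would minimize the largest per-batch mean loss.
losses = [td_errors(b, q, q_target).mean() for b in batches]
worst = int(np.argmax(losses))
print("selected batch:", worst)
```

In a full DQN implementation, the gradient step would be taken on `losses[worst]` instead of on a single sampled batch, leaving the rest of the training loop unchanged.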
Stochastic compositional optimization (SCO) has attracted considerable attention because of its broad applicability to important real-world problems. However, existing works on SCO assume that the projection within a solution update is simple, which fails to hold for problem instances where the constraints are themselves in the form of expectations, such as empirical conditional value-at-risk constraints. We study a novel model that incorporates single-level expected value and two-level compositional constraints into the current SCO framework. Our model can be applied widely to data-driven optimization and risk management, including risk-averse optimization and high-moment portfolio selection, and can handle multiple constraints. We further propose a class of primal-dual algorithms that generate sequences converging to the optimal solution at the rate of $\mathcal{O}(1/\sqrt{N})$ under both single-level expected value and two-level compositional constraints, where $N$ is the iteration counter, establishing benchmarks for expected value constrained SCO.
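The problem class can be sketched, in assumed notation (the symbols below are illustrative, not the paper's), as a compositional objective subject to both single-level expected value constraints and two-level compositional constraints:

```latex
\min_{x \in \mathcal{X}} \; f\big(\mathbb{E}[g(x;\xi)]\big)
\quad \text{s.t.} \quad
\mathbb{E}[h_i(x;\zeta_i)] \le 0, \qquad
u_j\big(\mathbb{E}[v_j(x;\eta_j)]\big) \le 0,
\qquad i = 1,\dots,m_1,\; j = 1,\dots,m_2,
```

where the expectations inside the constraints are what make the per-iteration projection non-trivial and motivate the primal-dual treatment.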
Biases have marked medical history, leading to unequal care affecting marginalised groups. The patterns of missingness in observational data often reflect these group discrepancies, but the algorithmic fairness implications of group-specific missingness are not well understood. Despite its potential impact, imputation is too often a forgotten preprocessing step. At best, practitioners guide imputation choice by optimising overall performance, ignoring how this preprocessing can reinforce inequities. Our work questions this choice by studying how imputation affects downstream algorithmic fairness. First, we provide a structured view of the relationship between clinical presence mechanisms and group-specific missingness patterns. Then, through simulations and real-world experiments, we demonstrate that the imputation choice influences marginalised group performance and that no imputation strategy consistently reduces disparities. Importantly, our results show that current practices may endanger health equity as similarly performing imputation strategies at the population level can affect marginalised groups in different ways. Finally, we propose recommendations for mitigating inequity stemming from a neglected step of the machine learning pipeline.
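The core finding can be illustrated with a toy simulation (distributions, group sizes, and missingness rates below are invented for illustration): two imputation strategies that look similar overall can leave very different errors on a marginalised group with higher missingness.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two groups with different feature distributions and missingness rates (toy assumption).
n = 2000
group = rng.integers(0, 2, n)                      # 0 = majority, 1 = marginalised
x_true = np.where(group == 0,
                  rng.normal(0.0, 1.0, n),
                  rng.normal(2.0, 1.0, n))
miss = rng.random(n) < np.where(group == 0, 0.1, 0.4)  # group-specific missingness

x_obs = x_true.copy()
x_obs[miss] = np.nan

# Strategy A: population-mean imputation; Strategy B: per-group mean imputation.
x_a = np.where(miss, np.nanmean(x_obs), x_obs)
x_b = x_obs.copy()
for g in (0, 1):
    m = group == g
    x_b[m & miss] = np.nanmean(x_obs[m])

for name, xi in [("population-mean", x_a), ("group-mean", x_b)]:
    errs = [np.abs(xi[(group == g) & miss] - x_true[(group == g) & miss]).mean()
            for g in (0, 1)]
    print(name, "per-group imputation error:", [round(e, 2) for e in errs])
```

The population mean is dominated by the majority group, so the marginalised group's missing values are imputed far from their true distribution, a disparity invisible to any population-level metric.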
Generating melody from lyrics is an interesting yet challenging task at the intersection of artificial intelligence and music. However, the difficulty of keeping the generated melody consistent with the input lyrics has limited the quality of previous works. We demonstrate an interpretable lyrics-to-melody generation system that lets users inspect the generation process and recreate desired songs. To make generated melodies reliably match their lyrics, mutual information is exploited to strengthen the consistency between lyrics and generated melodies. Gumbel-Softmax is used to overcome the non-differentiability of generating discrete music attributes with Generative Adversarial Networks (GANs). Moreover, the predicted probabilities output by the generator are used to recommend music attributes. Interacting with our lyrics-to-melody generation system, users can listen to the generated AI song and recreate a new song by selecting from the recommended music attributes.
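The Gumbel-Softmax trick can be sketched in NumPy: add Gumbel noise to the generator's logits, take a temperature-controlled softmax to get a differentiable relaxed one-hot sample, and optionally harden it via argmax (straight-through). The 12 "pitch-class" logits below are a hypothetical stand-in for the paper's music attributes.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0):
    """Sample a relaxed one-hot vector from categorical logits (Gumbel-Softmax)."""
    g = -np.log(-np.log(rng.random(logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau                          # lower tau -> closer to one-hot
    y = np.exp(y - y.max())                         # numerically stable softmax
    return y / y.sum()

# Hypothetical logits over 12 pitch classes produced by the generator.
logits = rng.normal(size=12)
soft = gumbel_softmax(logits, tau=0.5)
# Straight-through variant: discrete one-hot forward, soft gradient backward.
hard = np.eye(12)[soft.argmax()]
print(round(float(soft.sum()), 3), float(hard.sum()))  # 1.0 1.0
```

In the GAN, the discriminator sees the (hard or soft) sample while gradients flow through the soft probabilities, which also double as the recommendation scores shown to users.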
With the gradual popularization of self-driving, it is becoming increasingly important for vehicles to make the right driving decisions and autonomously obey traffic rules by correctly recognizing traffic signs. However, machine learning-based traffic sign recognition on the Internet of Vehicles (IoV) requires a large amount of traffic sign data from distributed vehicles to be gathered on a centralized server for model training, which brings a serious risk of privacy leakage because traffic sign data contain substantial location privacy information. To address this issue, we first exploit privacy-preserving federated learning to collaboratively train accurate recognition models without sharing raw traffic sign data. Nevertheless, due to the limited computing and energy resources of most devices, it is hard for vehicles to continuously undertake complex artificial intelligence tasks. Therefore, we introduce Spiking Neural Networks (SNNs), regarded as the next generation of neural networks, into traffic sign recognition for energy-efficient and fast model training, which is practical and well suited to IoV scenarios. Furthermore, we design a novel encoding scheme for SNNs based on neuron receptive fields, which extracts information from the pixel and spatial dimensions of traffic signs to achieve high-accuracy training. Numerical results indicate that the proposed federated SNN outperforms traditional federated convolutional neural networks in terms of accuracy, noise immunity, and energy efficiency.
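One common receptive-field encoding for SNN inputs converts a pixel intensity into spike times through a population of Gaussian-tuned neurons: the neuron whose receptive-field center is closest to the intensity fires earliest. This is a generic sketch of that idea, with invented parameters, not the paper's specific scheme.

```python
import numpy as np

def receptive_field_encode(pixel, n_neurons=8, t_max=10.0):
    """Encode a pixel intensity in [0, 1] as spike times from `n_neurons`
    Gaussian receptive fields (time-to-first-spike coding)."""
    centers = np.linspace(0.0, 1.0, n_neurons)          # receptive-field centers
    width = 1.0 / (n_neurons - 1)                       # shared tuning width
    response = np.exp(-0.5 * ((pixel - centers) / width) ** 2)
    # Stronger response -> earlier spike.
    return (1.0 - response) * t_max

times = receptive_field_encode(0.37)
print(np.round(times, 2))
```

Applying this per pixel yields a spatio-temporal spike pattern that an SNN can process directly, instead of the real-valued tensors a CNN consumes.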
Pinyin-to-Character conversion (P2C) is the core task of the Input Method Engine (IME) in commercial input software for Asian languages such as Chinese, Japanese, and Thai. It is usually treated as a sequence labelling task and solved with a language model, e.g. an n-gram model or an RNN. However, the low capacity of n-gram models and RNNs limits their performance. This paper introduces a new solution named PERT, which stands for bidirectional Pinyin Encoder Representations from Transformers. It achieves a significant performance improvement over the baselines. Furthermore, we combine PERT with an n-gram model under a Markov framework, improving performance further. Lastly, an external lexicon is incorporated into PERT to address the OOD issue of IMEs.
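One simple way to combine two models under a Markov framework is log-linear interpolation of their per-candidate scores at each decoding step, which a Viterbi or beam search would then maximize over the whole sequence. The probabilities and the weight below are invented for illustration; they are not taken from the paper.

```python
import math

# Hypothetical per-candidate scores for one pinyin syllable "shi":
# the n-gram LM's probability given the previous character, and PERT's
# contextual probability for the same position.
ngram_p = {"是": 0.40, "时": 0.30, "事": 0.15}
pert_p  = {"是": 0.25, "时": 0.55, "事": 0.10}

lam = 0.3  # interpolation weight (a tunable assumption)
combined = {c: lam * math.log(ngram_p[c]) + (1 - lam) * math.log(pert_p[c])
            for c in ngram_p}
best = max(combined, key=combined.get)
print(best)
```

Here the contextual model overrides the n-gram's local preference; in a full decoder this combined score replaces the single-model emission score at every position of the lattice.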
Synthetic aperture radar (SAR) tomography (TomoSAR) has attracted remarkable interest for its ability to achieve three-dimensional reconstruction along the elevation direction from multiple observations. In recent years, the compressed sensing (CS) technique has been introduced into TomoSAR for its super-resolution ability with limited samples. However, CS-based methods suffer from several drawbacks, including weak noise resistance, high computational complexity, and complex parameter fine-tuning. Among the different CS algorithms, the iterative soft-thresholding algorithm (ISTA) is widely used as a robust reconstruction approach; however, its parameters are chosen manually, which usually requires a time-consuming fine-tuning process to achieve the best performance. Aiming at efficient TomoSAR imaging, this paper proposes a novel sparse unfolding network named analytic learned ISTA (ALISTA) for the TomoSAR imaging problem, in which the key parameters of ISTA are learned from training data via deep learning, avoiding complex parameter fine-tuning and significantly relieving the training burden. In addition, experiments verify that it is feasible to use traditional CS algorithms to generate training labels, which provides a tangible supervised training method that achieves better 3D reconstruction performance even in the absence of labeled data in real applications.
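The plain ISTA baseline that ALISTA unfolds can be sketched on a generic sparse recovery problem: each iteration is a gradient step on the data fit followed by soft-thresholding. The step size and threshold below are hand-tuned constants, which is exactly the fine-tuning burden the paper replaces with learned per-iteration parameters; the measurement matrix is a random stand-in, not a TomoSAR steering matrix.

```python
import numpy as np

def soft_threshold(x, theta):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(A, y, theta=0.01, n_iter=1000):
    """Plain ISTA for min 0.5*||Ax - y||^2 + theta*||x||_1.
    Step size and threshold are fixed here; ALISTA learns them from data."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * theta)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 120)) / np.sqrt(60)   # toy measurement operator
x_true = np.zeros(120)
x_true[[5, 40, 90]] = [1.0, -0.8, 0.5]         # 3 scatterers along elevation
y = A @ x_true
x_hat = ista(A, y)
print("recovered support:", sorted(np.argsort(-np.abs(x_hat))[:3].tolist()))
```

An unfolded network replaces the fixed `step` and `theta` with trainable values at each of a small, fixed number of iterations, trading the long loop above for a short learned one.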
Synthetic aperture radar (SAR) tomography (TomoSAR) enables the reconstruction and three-dimensional (3D) localization of targets based on multiple two-dimensional (2D) observations of the same scene. Resolving along the elevation direction can be treated as a line spectrum estimation problem. However, traditional super-resolution spectrum estimation algorithms require multiple snapshots and uncorrelated targets. Meanwhile, compressed sensing (CS) based methods, the most popular TomoSAR imaging methods in recent years, suffer from a grid mismatch effect that markedly degrades imaging performance. As a gridless CS approach, atomic norm minimization (ANM) avoids the gridding effect but requires enormous computing resources. Addressing the above issues, this paper applies an improved fast ANM algorithm to TomoSAR elevation focusing by introducing the iterative Vandermonde decomposition and shrinkage-thresholding (IVDST) ANM algorithm, which reduces the huge computational complexity of the conventional, time-consuming semidefinite programming (SDP) while retaining the benefits of ANM in terms of gridless imaging and single-snapshot recovery. We conducted experiments using simulated data to evaluate the performance of the proposed method, and reconstruction results for an urban area from the SARMV3D-Imaging 1.0 dataset are also presented.
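The SDP that makes conventional ANM expensive, and that IVDST sidesteps, has the following standard form in gridless line-spectrum estimation (shown in generic notation, not necessarily the paper's exact formulation): for a length-$n$ measurement vector $\mathbf{z}$,

```latex
\|\mathbf{z}\|_{\mathcal{A}} \;=\; \min_{\mathbf{u},\, t}\;
\frac{1}{2n}\,\mathrm{Tr}\big(T(\mathbf{u})\big) + \frac{t}{2}
\quad \text{s.t.} \quad
\begin{bmatrix} T(\mathbf{u}) & \mathbf{z} \\ \mathbf{z}^{H} & t \end{bmatrix} \succeq 0,
```

where $T(\mathbf{u})$ is the Hermitian Toeplitz matrix built from $\mathbf{u}$. Solving this semidefinite program scales poorly with $n$, which is what motivates replacing it with iterative Vandermonde decomposition and shrinkage-thresholding.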