The study of action recognition has attracted considerable attention recently due to its broad applications in multiple areas. However, the issue of discontinuous training video, which not only degrades the performance of action recognition models but also complicates the data augmentation process, remains under-explored. In this study, we introduce 4A (Action Animation-based Augmentation Approach), an innovative data augmentation pipeline that addresses this problem. The main contributions of our work are: (1) we investigate the severe performance degradation of action recognition models trained on discontinuous video, as well as the limitations of existing augmentation methods in solving this problem; (2) we propose a novel augmentation pipeline, 4A, that addresses the problem of discontinuous training video while achieving smoother and more natural-looking action representations than the latest data augmentation methods; (3) by employing our data augmentation techniques, we achieve the same performance with only 10% of the original real-world training data as with the full dataset, and better performance on in-the-wild videos.
Current datasets for action recognition face limitations stemming from traditional collection and generation methods, including a constrained range of action classes, the absence of multi-viewpoint recordings, limited diversity, poor video quality, and labor-intensive manual collection. To address these challenges, we introduce GTAutoAct, an innovative dataset generation framework that leverages game engine technology to facilitate advancements in action recognition. GTAutoAct excels at automatically creating large-scale, well-annotated datasets with extensive action classes and superior video quality. Our framework's distinctive contributions encompass: (1) it innovatively transforms readily available coordinate-based 3D human motion into a rotation-oriented representation better suited to multiple viewpoints; (2) it employs dynamic segmentation and interpolation of rotation sequences to create smooth and realistic action animations; (3) it offers extensively customizable animation scenes; (4) it implements an autonomous video capture and processing pipeline, featuring a randomly navigating camera with auto-trimming and labeling functionality. Experimental results underscore the framework's robustness and highlight its potential to significantly improve action recognition model training.
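The second contribution, interpolating rotation sequences into smooth animations, is commonly implemented with spherical linear interpolation (slerp) between per-joint keyframe quaternions. A minimal sketch using SciPy is shown below; the keyframe times and joint rotations are hypothetical placeholders, and GTAutoAct's actual segmentation and interpolation scheme may differ.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Hypothetical keyframes for a single joint: times (seconds) and rotations.
key_times = np.array([0.0, 0.5, 1.0])
key_rots = Rotation.from_euler("xyz", [[0, 0, 0],
                                       [30, 10, 0],
                                       [60, 0, -15]], degrees=True)

# Slerp interpolates between keys at constant angular velocity,
# producing smooth in-between poses.
slerp = Slerp(key_times, key_rots)

# Sample at the animation frame rate (e.g., 30 FPS).
frame_times = np.linspace(0.0, 1.0, 31)
frame_rots = slerp(frame_times)          # Rotation object with 31 elements
print(frame_rots.as_quat().shape)        # (31, 4) unit quaternions
```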
Novel view synthesis of dynamic scenes has been an intriguing yet challenging problem. Despite recent advancements, simultaneously achieving high-resolution photorealistic results, real-time rendering, and compact storage remains a formidable task. To address these challenges, we propose Spacetime Gaussian Feature Splatting as a novel dynamic scene representation, composed of three pivotal components. First, we formulate expressive Spacetime Gaussians by enhancing 3D Gaussians with temporal opacity and parametric motion/rotation. This enables Spacetime Gaussians to capture static, dynamic, as well as transient content within a scene. Second, we introduce splatted feature rendering, which replaces spherical harmonics with neural features. These features facilitate the modeling of view- and time-dependent appearance while maintaining small size. Third, we leverage the guidance of training error and coarse depth to sample new Gaussians in areas that are challenging to converge with existing pipelines. Experiments on several established real-world datasets demonstrate that our method achieves state-of-the-art rendering quality and speed, while retaining compact storage. At 8K resolution, our lite-version model can render at 60 FPS on an Nvidia RTX 4090 GPU.
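To make the first component concrete, a temporal opacity and a parametric motion trajectory consistent with the abstract's description can be written as below; this is our reading of the abstract, and the paper's exact parameterization may differ in detail. Here $\sigma_s$ is the spatial (3D Gaussian) opacity, $\mu_\tau$ and $s_\tau$ a temporal center and scale, and $b_k$ polynomial motion coefficients:

\[
\sigma(t) = \sigma_s \exp\!\big(-s_\tau\,(t-\mu_\tau)^2\big), \qquad
\mu(t) = \sum_{k=0}^{n} b_k\,(t-\mu_\tau)^k,
\]

so each Spacetime Gaussian fades in and out around its temporal center (capturing transient content) while its mean follows a low-order polynomial in time; rotation can be parameterized analogously.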
In a multitude of industrial fields, a key objective entails optimising resource management whilst satisfying user requirements. Resource management by industrial practitioners can result in a passive transfer of user loads across resource providers, a phenomenon whose accurate characterisation is both challenging and crucial. This research reveals the existence of user clusters, which capture macro-level user transfer patterns amid resource variation. We then propose CLUSTER, an interpretable hierarchical Bayesian nonparametric model capable of automating cluster identification and thereby predicting user transfer in response to resource variation. Furthermore, CLUSTER facilitates uncertainty quantification for more reliable decision-making. Our method preserves privacy by functioning independently of personally identifiable information. Experiments with simulated and real-world data from the communications industry reveal a pronounced alignment between prediction results and empirical observations across a spectrum of resource management scenarios. This research establishes a solid groundwork for advancing the development of resource management strategies.
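As a hedged illustration of the Bayesian nonparametric mechanism that lets such a model automate cluster identification, the sketch below samples cluster assignments from a Chinese restaurant process prior, which grows the number of clusters with the data. The concentration value and names are placeholders, not CLUSTER's actual model, which also includes a likelihood over transfer behaviour.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0  # concentration: higher -> more clusters (assumed value)

def crp_assign(n_points, alpha, rng):
    """Draw cluster assignments from a Chinese restaurant process prior."""
    assignments = [0]
    counts = [1]
    for _ in range(1, n_points):
        # Probability of joining cluster k is proportional to counts[k];
        # probability of opening a new cluster is proportional to alpha.
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)  # open a new cluster
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

print(crp_assign(10, alpha, rng))  # e.g., [0, 0, 1, 0, 0, 2, ...]
```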
An aerial manipulator performing aerial work tasks faces a highly complex operating environment and is affected by multi-source internal and external disturbances. In this paper, to effectively improve the anti-disturbance control performance of the aerial manipulator, an adaptive neural network backstepping control method based on variable-inertia-parameter modeling is proposed. First, for the intense internal coupling disturbance, we analyze and model it from the perspective of its generation mechanism, and derive the dynamics model of the aerial manipulator system together with a coupling disturbance model based on variable inertia parameters. Through the proposed coupling disturbance model, we can compensate for the strong coupling disturbance in a feedforward manner. Then, an adaptive neural network is proposed and applied to estimate and compensate for the additional disturbances, and the closed-loop controller is designed based on the backstepping control method. Finally, we verify the correctness of the proposed coupling disturbance model through physical experiments under large-range motion of the manipulator. Two sets of comparative simulation results also demonstrate the accurate estimation of the additional disturbances by the proposed adaptive neural network, as well as the effectiveness and superiority of the proposed control method.
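To illustrate the structure of such a controller, consider a generic second-order subsystem $\dot{x}_1 = x_2$, $\dot{x}_2 = f(x) + g(x)u + d_c + d_a$, where $d_c$ is the modeled coupling disturbance (compensated by feedforward) and $d_a$ the additional disturbance estimated by a neural network $\hat{d}_a = \hat{W}^{\top}\phi(x)$. A standard backstepping design with an adaptive law is sketched below; this is a generic textbook form, not the paper's exact control law:

\[
z_1 = x_1 - x_d, \qquad \alpha_1 = \dot{x}_d - k_1 z_1, \qquad z_2 = x_2 - \alpha_1,
\]
\[
u = g(x)^{-1}\big(\dot{\alpha}_1 - z_1 - k_2 z_2 - f(x) - d_c - \hat{W}^{\top}\phi(x)\big), \qquad
\dot{\hat{W}} = \Gamma\big(\phi(x)\, z_2 - \sigma \hat{W}\big),
\]

with gains $k_1, k_2 > 0$, adaptation gain $\Gamma > 0$, and $\sigma$-modification for robustness; a Lyapunov function $V = \tfrac{1}{2}z_1^2 + \tfrac{1}{2}z_2^2 + \tfrac{1}{2}\tilde{W}^{\top}\Gamma^{-1}\tilde{W}$ then establishes boundedness of the tracking errors.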
The coupling disturbance between the manipulator and the unmanned aerial vehicle (UAV) deteriorates the control performance of the system. To achieve high performance for the aerial manipulator, a robust fractional-order fast terminal sliding mode control (FOFTSMC) strategy based on mutable inertia parameters is proposed in this paper. First, the dynamics of the aerial manipulator, taking the coupling disturbance into account, are derived by utilizing mutable inertia parameters. Then, based on the dynamic model, a robust FOFTSMC algorithm is designed to make the system fly steadily under the coupling disturbance. Furthermore, stability analysis is conducted to prove the convergence of the tracking errors. Finally, comparative simulation results are given to show the validity and superiority of the proposed scheme.
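For concreteness, one representative fractional-order fast terminal sliding surface from the FTSMC literature is

\[
s = \dot{e} + \alpha\, e + \beta\, D^{\lambda}\,\mathrm{sig}^{q/p}(e), \qquad \mathrm{sig}^{\gamma}(e) = |e|^{\gamma}\operatorname{sign}(e),
\]

with $\alpha, \beta > 0$, odd integers $p > q > 0$ so that $0 < q/p < 1$, and $D^{\lambda}$ a fractional-order derivative (e.g., Caputo) of order $0 < \lambda < 1$. The terminal term $\mathrm{sig}^{q/p}(e)$ yields finite-time convergence of the tracking error, while the fractional operator adds a tuning degree of freedom; the paper's exact surface and reaching law may differ.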
Monte Carlo rendering algorithms are widely used to produce photorealistic computer graphics images. However, these algorithms need to sample a substantial number of rays per pixel to enable proper global illumination and thus require an immense amount of computation. In this paper, we present a hybrid rendering method to speed up Monte Carlo rendering algorithms. Our method first generates two versions of a rendering: one at a low resolution with a high sample rate (LRHS) and the other at a high resolution with a low sample rate (HRLS). We then develop a deep convolutional neural network to fuse these two renderings into a high-quality image as if it were rendered at a high resolution with a high sample rate. Specifically, we formulate this fusion task as a super resolution problem that generates a high resolution rendering from a low resolution input (LRHS), assisted by the HRLS rendering. The HRLS rendering provides critical high frequency details that are difficult for any super resolution method to recover from the LRHS alone. Our experiments show that our hybrid rendering algorithm is significantly faster than the state-of-the-art Monte Carlo denoising methods while rendering high-quality images, when tested on both our own BCR dataset and the Gharbi dataset. \url{https://github.com/hqqxyy/msspl}
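As a hedged sketch of the fusion step, the toy network below upsamples the LRHS rendering and fuses it with the HRLS rendering via concatenation and a few convolutions, predicting a residual over the upsampled input. Layer sizes, the class name, and the scale factor are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionSR(nn.Module):
    """Toy LRHS+HRLS fusion: super-resolve LRHS guided by HRLS details."""
    def __init__(self, scale=4, ch=32):
        super().__init__()
        self.scale = scale
        # 6 input channels: upsampled LRHS (RGB) concatenated with HRLS (RGB).
        self.body = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, lrhs, hrls):
        up = F.interpolate(lrhs, scale_factor=self.scale,
                           mode="bilinear", align_corners=False)
        # HRLS contributes high-frequency detail; predict a residual over `up`.
        return up + self.body(torch.cat([up, hrls], dim=1))

lrhs = torch.randn(1, 3, 64, 64)     # low resolution, high sample rate
hrls = torch.randn(1, 3, 256, 256)   # high resolution, low sample rate
print(FusionSR()(lrhs, hrls).shape)  # torch.Size([1, 3, 256, 256])
```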
A data center (DC) contains both IT devices and facility equipment, and operating a DC requires a high-quality monitoring (anomaly detection) system. The DC monitoring system draws on a large number of sensors in the computer rooms, and these sensors are inherently related. This work proposes a data-driven pipeline (ts2graph) to build a DC graph of things (sensor graph) from the time series measurements of the sensors. The sensor graph is an undirected weighted property graph, where sensors are the nodes, sensor features are the node properties, and sensor connections are the edges. The sensor node properties are defined by features that characterize sensor events (behaviors), rather than the original time series. The sensor connection (edge weight) is defined by the probability of concurrent events between two sensors. A graph of things prototype is constructed from the sensor time series of a real data center, and it successfully reveals meaningful relationships between the sensors. To demonstrate the use of the DC sensor graph for anomaly detection, we compare the performance of a graph neural network (GNN) against existing standard methods on synthetic anomaly data. The GNN outperforms the existing algorithms by a factor of 2 to 3 (in terms of precision and F1 score) because it takes the topological relationships between DC sensors into account. We expect that the DC sensor graph can serve as the infrastructure for the DC monitoring system, since it represents the relationships between the sensors.
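A hedged sketch of the edge-weight definition: given binary event indicators per sensor (derived from the time series), the weight between two sensors can be estimated as the empirical probability of concurrent events. The Jaccard-style estimator and the synthetic event trains below are illustrative assumptions; the event-extraction step and exact estimator in ts2graph may differ.

```python
import numpy as np

def edge_weight(events_a, events_b):
    """Empirical probability that sensors a and b raise events concurrently.

    events_a, events_b: binary arrays (1 = event at that time step).
    Normalizing by the union keeps the weight in [0, 1]; this estimator
    is illustrative, not necessarily the paper's exact choice.
    """
    both = np.logical_and(events_a, events_b).sum()
    either = np.logical_or(events_a, events_b).sum()
    return both / either if either > 0 else 0.0

rng = np.random.default_rng(1)
a = rng.random(1000) < 0.1           # hypothetical event train for sensor a
b = a | (rng.random(1000) < 0.02)    # sensor b: mostly concurrent with a
print(round(edge_weight(a, b), 3))   # high weight -> strong edge in the graph
```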
In Cassini ISS (Imaging Science Subsystem) images, contour detection is often performed on disk-resolved objects to accurately locate their centers, so contour detection is a key problem. Traditional edge detection methods, such as Canny and Roberts, often extract contours with too much interior detail and noise. Although deep convolutional neural networks have been applied successfully to many image tasks, such as classification and object detection, they require more time and computing resources. In this paper, a contour detection algorithm based on H-ELM (Hierarchical Extreme Learning Machine) and DenseCRF (Dense Conditional Random Field) is proposed for Cassini ISS images. The experimental results show that this algorithm outperforms traditional machine learning methods such as SVM and ELM, and even deep convolutional neural networks, and the extracted contours are closer to the actual contours. Moreover, it can be trained and tested quickly on a PC with an ordinary configuration, and so is practical for contour detection in Cassini ISS images.
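For context on the ELM building block underlying H-ELM, the sketch below shows the basic extreme learning machine: a random hidden layer followed by a closed-form least-squares output layer, which is what makes training fast on ordinary hardware. The hierarchical stacking and the DenseCRF refinement used by the paper are omitted, and the patch features and labels are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=200):
    """Basic ELM: random hidden weights, output weights by pseudoinverse."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)           # random nonlinear features
    beta = np.linalg.pinv(H) @ Y     # closed-form least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Hypothetical image-patch features -> contour/non-contour scores.
X = rng.standard_normal((500, 25))
Y = (X.sum(axis=1, keepdims=True) > 0).astype(float)
W, b, beta = elm_fit(X, Y)
acc = ((elm_predict(X, W, b, beta) > 0.5) == (Y > 0.5)).mean()
print(f"train accuracy: {acc:.2f}")
```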