RecBole has recently attracted increasing attention from the research community. As the number of users has grown, we have received many suggestions and update requests, which has motivated us to make significant improvements to the library so as to meet user requirements and contribute to the research community. This technical report introduces our latest improvements to RecBole. In general, over the past few months we have focused on the flexibility and efficiency of RecBole. More specifically, we have pursued four development targets: (1) more flexible data processing, (2) more efficient model training, (3) more reproducible configurations, and (4) more comprehensive user documentation. Readers can download the above updates at: https://github.com/RUCAIBox/RecBole.
To improve the accuracy of direction-of-arrival (DOA) estimation, this paper proposes a deep learning (DL)-based method called CDAE-DNN for a hybrid analog and digital (HAD) massive MIMO receive array with an overlapped subarray (OSA) architecture. In the proposed method, the sample covariance matrix (SCM) is first fed into a convolutional denoising autoencoder (CDAE) to remove the approximation error, and the output of the CDAE is then passed to a fully-connected (FC) network to obtain the estimate. Simulation results show that the proposed CDAE-DNN has substantial performance advantages over the traditional MUSIC algorithm and a CNN-based method, especially at low signal-to-noise ratio (SNR) and with few snapshots. The OSA architecture is also shown to significantly improve estimation accuracy compared to the non-overlapped subarray (NOSA) architecture. In addition, the Cramer-Rao lower bound (CRLB) for the HAD-OSA architecture is presented.
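As a minimal sketch of the pipeline's front end, the snippet below forms the sample covariance matrix (SCM) from simulated snapshots and stacks its real and imaginary parts as the two input channels a CDAE would consume. The array size, snapshot count, and single-source scenario are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical sketch: build the SCM that is fed to the CDAE.
rng = np.random.default_rng(0)
N, T = 8, 100                       # antennas (after analog combining), snapshots
theta = np.deg2rad(20.0)            # assumed source direction
a = np.exp(1j * np.pi * np.arange(N) * np.sin(theta))      # half-wavelength ULA
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)   # source signal
noise = 0.1 * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
X = np.outer(a, s) + noise          # received snapshots, N x T
R_hat = X @ X.conj().T / T          # sample covariance matrix (SCM)
# Real/imaginary parts of R_hat would serve as the CDAE's input channels.
scm_input = np.stack([R_hat.real, R_hat.imag])             # shape (2, N, N)
```

The SCM is Hermitian, so a denoising network only needs to learn to suppress the finite-snapshot estimation error before the FC stage regresses the angle.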
Passive geolocation by multiple unmanned aerial vehicles (UAVs) covers a wide range of military and civilian applications, including rescue, wildlife tracking, and electronic warfare. The sensor-target geometry is known to significantly affect localization precision. Existing sensor placement strategies mainly address cases without any constraints on the sensors' locations. However, UAVs cannot simply fly or hover in arbitrary regions due to realistic constraints such as geographical limitations, security issues, and maximum flying speed. In this paper, optimal geometrical configurations of UAVs for received signal strength (RSS)-based localization under region constraints are investigated. The optimization problem is formulated using the D-optimality criterion, i.e., maximizing the determinant of the Fisher information matrix (FIM). Through rigorous algebraic and geometric derivations, optimal closed-form configurations of UAVs under different flying states are proposed. Finally, the effectiveness and practicality of the proposed configurations are demonstrated by simulation examples.
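To make the D-optimality criterion concrete, the sketch below compares det(FIM) for two candidate UAV geometries in 2-D RSS localization. The path-loss exponent, shadowing level, and the two geometries are assumptions for illustration; the standard RSS FIM structure (a distance-weighted sum of bearing outer products) is used.

```python
import numpy as np

# Hedged sketch of the D-optimality criterion det(FIM) for RSS localization.
def rss_fim(sensors, target, gamma=2.0, sigma_db=1.0):
    """2x2 FIM for RSS localization; gamma/sigma_db are assumed constants."""
    k = (10.0 * gamma / (np.log(10.0) * sigma_db)) ** 2
    fim = np.zeros((2, 2))
    for p in sensors:
        diff = target - p
        d = np.linalg.norm(diff)
        u = diff / d                    # unit bearing vector, sensor -> target
        fim += (k / d**2) * np.outer(u, u)
    return fim

target = np.array([0.0, 0.0])
# geometry A: UAVs spread uniformly on a circle (good angular diversity)
ang = np.linspace(0, 2 * np.pi, 3, endpoint=False)
circle = [10.0 * np.array([np.cos(t), np.sin(t)]) for t in ang]
# geometry B: UAVs confined to a narrow arc (poor angular diversity)
arc = [10.0 * np.array([np.cos(t), np.sin(t)]) for t in np.deg2rad([0, 5, 10])]
det_A = np.linalg.det(rss_fim(circle, target))
det_B = np.linalg.det(rss_fim(arc, target))
# The D-optimal design prefers the geometry with the larger det(FIM).
```

Region constraints in the paper amount to restricting the feasible bearing angles, which is exactly why the unconstrained "spread around the target" optimum must be re-derived.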
Affective behaviour analysis has attracted researchers' attention due to its broad applications. However, obtaining accurate annotations for massive numbers of face images is labor-intensive. We therefore propose to exploit prior facial information via a Masked Auto-Encoder (MAE) pretrained on unlabeled face images. Furthermore, we combine the MAE-pretrained Vision Transformer (ViT) and an AffectNet-pretrained CNN to perform multi-task emotion recognition. We observe that expression and action unit (AU) scores are pure and intact features for valence-arousal (VA) regression. Accordingly, we use the AffectNet-pretrained CNN to extract expression scores and concatenate them with the expression and AU scores from the ViT to obtain the final VA features. Moreover, we propose a co-training framework with two parallel MAE-pretrained ViTs for the expression recognition task. To make the two views independent, we randomly mask most of the patches during training. Jensen-Shannon (JS) divergence is then minimized to make the predictions of the two views as consistent as possible. The results on ABAW4 show that our methods are effective.
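The JS consistency term between the two co-trained views can be sketched as below. The view names and the 3-class toy distributions are assumptions; the JS definition itself (average KL to the mixture) is standard.

```python
import numpy as np

# Minimal sketch of the Jensen-Shannon consistency loss between the
# class-probability outputs of two co-trained views (names are illustrative).
def kl(p, q, eps=1e-12):
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def js_divergence(p, q):
    m = 0.5 * (p + q)                       # mixture distribution
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p_view1 = np.array([0.7, 0.2, 0.1])         # expression probabilities, view 1
p_view2 = np.array([0.6, 0.3, 0.1])         # expression probabilities, view 2
loss_consistency = js_divergence(p_view1, p_view2)
# JS is symmetric, bounded above by log 2, and zero iff the views agree.
```

Because each view sees a different random subset of patches, this loss pushes the two ViTs toward agreement without letting them trivially copy each other's inputs.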
Caches play an important role in maintaining high and stable performance (i.e., high throughput, low tail latency, and low throughput jitter) in storage systems. Existing rule-based cache management methods, coupled with engineers' manual configurations, cannot meet the ever-growing requirements of both time-varying workloads and complex storage systems, leading to frequent cache overloading. In this paper, we propose, for the first time, a lightweight learning-based cache bandwidth control technique, called LQoCo, which adaptively controls the cache bandwidth so as to effectively prevent cache overloading in storage systems. Extensive experiments with various workloads on real systems show that LQoCo, with its strong adaptability and fast learning ability, can adapt to various workloads to effectively control cache bandwidth, thereby significantly improving storage performance (e.g., increasing throughput by 10%-20% and reducing throughput jitter and tail latency by 2X-6X and 1.5X-4X, respectively, compared with two representative rule-based methods).
To improve the efficiency and accuracy of direction finding with a massive MIMO receive array, it is necessary to determine the number of signal emitters in advance. In this paper, we present a complete DOA preprocessing system for inferring the number of passive emitters. First, to improve the accuracy of detecting the number of signals, two high-precision signal detectors are proposed: the square root of the maximum eigenvalue times the minimum eigenvalue (SR-MME) and the geometric mean (GM). Compared to other detectors, SR-MME and GM achieve a high detection probability while maintaining an extremely low false alarm probability. Second, once the existence of emitters is determined by the detectors, their number must be confirmed, which is a pattern classification problem. We therefore perform feature extraction on the eigenvalue sequence of the sample covariance matrix to construct feature vectors, and propose a multi-layer neural network (ML-NN) classifier. A support vector machine (SVM) and a naive Bayes classifier (NBC) are also designed. The simulation results show that the machine learning-based methods achieve good results in signal classification, especially the neural network, which maintains classification accuracy above 70% with a massive MIMO receive array. Finally, we analyze the classical signal classification methods, the Akaike information criterion (AIC) and minimum description length (MDL). We conclude that these two methods are not suitable for scenarios with massive receive arrays and perform much worse than machine learning-based classifiers.
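A hedged sketch of the two detector statistics as we read them from the abstract: SR-MME as the square root of the product of the extreme eigenvalues of the SCM, and GM as the geometric mean of all eigenvalues. Array size, snapshot count, and the simulated scenario are assumptions.

```python
import numpy as np

# Illustrative computation of the SR-MME and GM statistics from the SCM.
rng = np.random.default_rng(1)
N, T = 8, 200
a = np.exp(1j * np.pi * np.arange(N) * np.sin(np.deg2rad(30)))  # assumed ULA
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)
noise = rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))
X_signal = np.outer(a, s) + noise       # emitter present
X_noise = noise                         # noise only

def detector_stats(X):
    R = X @ X.conj().T / X.shape[1]     # sample covariance matrix
    ev = np.sort(np.linalg.eigvalsh(R)) # real eigenvalues, ascending
    sr_mme = np.sqrt(ev[-1] * ev[0])    # sqrt(lambda_max * lambda_min)
    gm = np.exp(np.mean(np.log(ev)))    # geometric mean of eigenvalues
    return sr_mme, gm

sr1, gm1 = detector_stats(X_signal)
sr0, gm0 = detector_stats(X_noise)
# Both statistics should be larger when an emitter is present.
```

In practice each statistic would be compared against a threshold chosen for a target false alarm probability; the eigenvalue sequence itself then feeds the ML-NN/SVM/NBC classifiers for counting the emitters.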
Numerous COVID-19 clinical decision support systems have been developed. However, many of these systems lack validity due to methodological shortcomings, including algorithmic bias. Methods: Logistic regression models were created to predict COVID-19 mortality, ventilator status, and inpatient status using a real-world dataset from four hospitals in New York City, and were analyzed for biases against race, gender, and age. Simple thresholding adjustments were applied during training to establish more equitable models. Results: Compared to the naively trained models, the calibrated models showed a 57% decrease in the number of biased trials, while predictive performance, measured by area under the receiver operating characteristic curve (AUC), remained unchanged. After calibration, the average sensitivity of the predictive models increased from 0.527 to 0.955. Conclusion: We demonstrate that naively training and deploying machine learning models on real-world data for predictive analytics of COVID-19 carries a high risk of bias. Simple adjustments or calibrations applied during model training can lead to substantial and sustained gains in fairness on subsequent deployment.
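The following is an illustrative sketch, not the paper's exact procedure, of the kind of per-group threshold adjustment described: choosing each subgroup's decision threshold so that its sensitivity reaches a common target. The synthetic score distributions and the target sensitivity are assumptions.

```python
import numpy as np

# Hypothetical per-group threshold calibration to equalize sensitivity.
rng = np.random.default_rng(2)

def calibrate_threshold(scores, labels, target_tpr=0.9):
    """Threshold at the (1 - target_tpr) quantile of positive-class scores."""
    pos = np.sort(scores[labels == 1])
    return np.quantile(pos, 1.0 - target_tpr)

# synthetic model scores for two subgroups with shifted distributions
scores_a = np.concatenate([rng.normal(0.7, 0.1, 100), rng.normal(0.3, 0.1, 100)])
labels_a = np.concatenate([np.ones(100), np.zeros(100)])
scores_b = np.concatenate([rng.normal(0.55, 0.1, 100), rng.normal(0.25, 0.1, 100)])
labels_b = np.concatenate([np.ones(100), np.zeros(100)])

t_a = calibrate_threshold(scores_a, labels_a)
t_b = calibrate_threshold(scores_b, labels_b)
tpr_a = np.mean(scores_a[labels_a == 1] >= t_a)
tpr_b = np.mean(scores_b[labels_b == 1] >= t_b)
# After calibration both groups reach (approximately) the target sensitivity,
# even though the two groups require different thresholds.
```

The point of the sketch is that a single global threshold would give the group with lower scores a much lower sensitivity; group-specific thresholds remove that gap without retraining the model.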
Glioblastoma is profoundly heterogeneous in regional microstructure and vasculature. Characterizing the spatial heterogeneity of glioblastoma could lead to more precise treatment. With unsupervised learning techniques, glioblastoma MRI-derived radiomic features have been widely utilized for tumor sub-region segmentation and survival prediction. However, the reliability of algorithm outcomes is often challenged by both the ambiguous intermediate process and the instability introduced by the randomness of clustering algorithms, especially for data from heterogeneous patients. In this paper, we propose an adaptive unsupervised learning approach for efficient MRI intra-tumor partitioning and glioblastoma survival prediction. A novel, problem-specific Feature-enhanced Auto-Encoder (FAE) is developed to enhance the representation of pairwise clinical modalities and thereby improve the clustering stability of unsupervised learning algorithms such as K-means. Moreover, the entire process is modelled with the Bayesian optimization (BO) technique and a custom loss function, so that the hyper-parameters can be adaptively optimized in reasonably few steps. The results demonstrate that the proposed approach produces robust and clinically relevant MRI sub-regions and statistically significant survival predictions.
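The clustering-stability concern can be illustrated with a small experiment (assumptions throughout: toy 2-D features, plain Lloyd's K-means): run K-means from several random initializations and inspect how much the final inertia varies across runs. A stable feature representation, as the FAE aims to provide, should make this spread near zero.

```python
import numpy as np

# Illustrative K-means stability check across random initializations.
def kmeans(X, k, seed, iters=50):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = []
        for j in range(k):
            pts = X[labels == j]
            new.append(pts.mean(axis=0) if len(pts) else centers[j])
        centers = np.array(new)
    inertia = ((X - centers[labels]) ** 2).sum()
    return labels, inertia

rng = np.random.default_rng(0)
# two well-separated synthetic "sub-regions" in a toy 2-D feature space
X = np.vstack([rng.normal(0, 0.2, (50, 2)), rng.normal(3, 0.2, (50, 2))])
inertias = [kmeans(X, k=2, seed=s)[1] for s in range(5)]
spread = max(inertias) - min(inertias)
# With well-represented features the runs agree and the spread is tiny;
# with entangled features, different seeds land in different local optima.
```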
For a passive direction-of-arrival (DOA) measurement system using massive multiple-input multiple-output (MIMO), it is mandatory to infer whether an emitter exists before performing DOA estimation. Inspired by the detection idea from radio detection and ranging (radar), three high-performance detectors are proposed to infer the existence of a single passive emitter from the eigen-space of the sample covariance matrix of the receive signal vector. The test statistic (TS) of the first method is defined as the ratio of the maximum eigenvalue (Max-EV) to the minimum eigenvalue (R-MaxEV-MinEV), while that of the second is the ratio of the Max-EV to the noise variance (R-MaxEV-NV). The TS of the third method is the mean of the maximum and minimum eigenvalues (M-MaxEV-MinEV). Their closed-form expressions are presented and the corresponding detection performance is given. Simulation results show that the proposed M-MaxEV-MinEV and R-MaxEV-NV methods achieve approximately the same detection performance, which is better than that of the traditional generalized likelihood ratio test method when the false alarm probability is less than 0.3.
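The three test statistics named above can be sketched directly from the eigenvalues of the sample covariance matrix. Array size, snapshot count, and the known noise variance are assumptions used only to make the snippet self-contained.

```python
import numpy as np

# Sketch of the three test statistics under the noise-only hypothesis:
#   R-MaxEV-MinEV = lambda_max / lambda_min
#   R-MaxEV-NV    = lambda_max / sigma^2   (noise variance assumed known)
#   M-MaxEV-MinEV = (lambda_max + lambda_min) / 2
rng = np.random.default_rng(3)
N, T, sigma2 = 8, 200, 1.0
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((N, T))
                               + 1j * rng.standard_normal((N, T)))
R = noise @ noise.conj().T / T                  # sample covariance matrix
ev = np.linalg.eigvalsh(R)                      # real eigenvalues, ascending
ts_ratio = ev[-1] / ev[0]                       # R-MaxEV-MinEV
ts_nv = ev[-1] / sigma2                         # R-MaxEV-NV
ts_mean = 0.5 * (ev[-1] + ev[0])                # M-MaxEV-MinEV
# Each statistic is compared against a threshold set by the target false
# alarm probability; exceeding it declares that an emitter is present.
```

When an emitter is present, the largest eigenvalue absorbs the signal power while the smallest stays near the noise floor, so all three statistics grow.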
Glioblastoma is profoundly heterogeneous in microstructure and vasculature, which may lead to tumor regional diversity and distinct treatment responses. Although successful in tumor sub-region segmentation and survival prediction, radiomics based on machine learning algorithms is challenged in its robustness, due to the vague intermediate process and the instability introduced by randomness. The weak interpretability of such models also poses challenges to clinical application. Here we propose a machine learning framework to semi-automatically fine-tune clustering algorithms and quantitatively identify stable sub-regions for reliable clinical survival prediction. Hyper-parameters are automatically determined by the global minimum of the trained Gaussian process (GP) surrogate model through Bayesian optimization (BO), alleviating the difficulty of parameter tuning for clinical researchers. To enhance the interpretability of the survival prediction model, we incorporate prior knowledge of intra-tumoral heterogeneity by segmenting tumor sub-regions and extracting sub-regional features. The results demonstrate that the global minimum of the trained GP surrogate can be used as a sub-optimal hyper-parameter solution for efficient tuning. The sub-regions segmented based on physiological MRI can be applied to predict patient survival, which could enhance the clinical interpretability of the machine learning model.
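The idea of reading hyper-parameters off the minimum of a trained GP surrogate can be sketched with a noiseless GP regression in pure NumPy. The RBF kernel, its length-scale, and the toy "clustering loss" objective are all assumptions, not the paper's setup.

```python
import numpy as np

# Hedged sketch: fit a GP surrogate to evaluated hyper-parameter points,
# then take the minimizer of its posterior mean as the chosen setting.
def rbf(a, b, ls=0.3):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

objective = lambda x: (x - 0.6) ** 2            # toy "clustering loss"
x_train = np.linspace(0.0, 1.0, 8)              # evaluated hyper-parameters
y_train = objective(x_train)                    # observed losses

K = rbf(x_train, x_train) + 1e-6 * np.eye(8)    # jitter for stability
alpha = np.linalg.solve(K, y_train)
x_grid = np.linspace(0.0, 1.0, 201)
mu = rbf(x_grid, x_train) @ alpha               # GP posterior mean on a grid
x_best = x_grid[np.argmin(mu)]                  # surrogate's global minimum
# x_best is then used as the (sub-optimal) hyper-parameter setting,
# sparing clinical researchers a manual grid search.
```

In full BO, an acquisition function would decide where to evaluate next; here only the final read-out step, minimizing the trained surrogate's mean, is shown.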