"Topic": models, code, and papers

WARPd: A linearly convergent first-order method for inverse problems with approximate sharpness conditions

Oct 24, 2021
Matthew J. Colbrook

Reconstruction of signals from undersampled and noisy measurements is a topic of considerable interest. Sharpness conditions directly control the recovery performance of restart schemes for first-order methods without the need for restrictive assumptions such as strong convexity. However, they are challenging to apply in the presence of noise or approximate model classes (e.g., approximate sparsity). We provide a first-order method: Weighted, Accelerated and Restarted Primal-dual (WARPd), based on primal-dual iterations and a novel restart-reweight scheme. Under a generic approximate sharpness condition, WARPd achieves stable linear convergence to the desired vector. Many problems of interest fit into this framework. For example, we analyze sparse recovery in compressed sensing, low-rank matrix recovery, matrix completion, TV regularization, minimization of $\|Bx\|_{l^1}$ under constraints ($l^1$-analysis problems for general $B$), and mixed regularization problems. We show how several quantities controlling recovery performance also provide explicit approximate sharpness constants. Numerical experiments show that WARPd compares favorably with specialized state-of-the-art methods and is ideally suited for solving large-scale problems. We also present a noise-blind variant based on the Square-Root LASSO decoder. Finally, we show how to unroll WARPd as neural networks. This approximation theory result provides lower bounds for stable and accurate neural networks for inverse problems and sheds light on architecture choices. Code and a gallery of examples are made available online as a MATLAB package.
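
For orientation, an approximate sharpness condition of the kind invoked here can be written schematically (our notation, not necessarily the paper's): for an objective $f$ with desired vector $\hat{x}$, constants $c > 0$, $\beta \ge 1$, and a slack $\eta \ge 0$ absorbing noise and model error,

$$\|x - \hat{x}\| \;\le\; c\,\big(f(x) - f(\hat{x}) + \eta\big)^{1/\beta} \quad \text{for all feasible } x.$$

Restarting an accelerated first-order method under such a condition yields linear convergence of the error down to a floor governed by $\eta$; exact sharpness is the case $\eta = 0$.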



Signal power and energy-per-bit optimization problems in mMTC systems

Sep 29, 2021
A. A. Burkov

The operation of Internet of Things (IoT) technology is currently an active area of study. Large numbers of heterogeneous self-powered sensors operate within a massive machine-type communications (mMTC) scenario using random access methods. Two key concerns in this type of communication are reducing the transmission signal power and extending device lifetime by reducing the energy consumed per bit. We formulate and analyze the problems of minimizing transmission power and energy per bit, in systems both with and without retransmissions, to obtain achievability bounds. A system model is described, within which four problems of minimizing signal power and energy consumption are formulated for given parameters (the number of information bits, the spectral efficiency of the system, and the Packet Delivery Ratio). Numerical results of solving these optimization problems are presented, yielding achievability bounds for the considered characteristics in systems with and without losses. Lower bounds obtained from the Shannon formula, which assumes unlimited message length, are also presented. The results show that minimizing one of the parameters (signal power or energy per bit) does not minimize the other. This difference is most significant for short information messages, which correspond to IoT scenarios. The results allow assessing the potential for minimizing transmission signal power and energy per bit in random multiple access systems under massive machine-type communications scenarios. The presented problems were solved without taking into account the average delay of message transmission.

* Submitted to Information and Control Systems journal (ISSN 1684-8853 (print); ISSN 2541-8610 (online)), DOI: 10.31799, http://www.i-us.ru/index.php/ius/index
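For reference, the Shannon lower bound mentioned above follows from the AWGN capacity formula: at spectral efficiency $\eta$ (bit/s/Hz), reliable communication requires $E_b/N_0 \ge (2^{\eta} - 1)/\eta$. A minimal sketch of this bound (our own illustration, not the paper's code):

```python
import numpy as np

def shannon_eb_n0_limit(spectral_efficiency):
    """Minimum Eb/N0 (linear scale) on an AWGN channel for a given
    spectral efficiency eta in bit/s/Hz: Eb/N0 >= (2**eta - 1) / eta."""
    eta = np.asarray(spectral_efficiency, dtype=float)
    return (2.0 ** eta - 1.0) / eta

# Example: the limit approaches ln 2 (about -1.59 dB) as eta -> 0.
for eta in [0.1, 0.5, 1.0, 2.0]:
    limit_db = 10 * np.log10(shannon_eb_n0_limit(eta))
    print(f"eta = {eta:.1f} bit/s/Hz -> Eb/N0 >= {limit_db:.2f} dB")
```

Finite-length achievability bounds of the kind analyzed in the paper sit above this asymptotic limit, and the gap is largest for the short messages typical of IoT traffic.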


Variable selection with missing data in both covariates and outcomes: Imputation and machine learning

Apr 06, 2021
Liangyuan Hu, Jung-Yi Joyce Lin, Jiayi Ji

The missing data issue is ubiquitous in health studies. Variable selection in the presence of both missing covariates and outcomes is an important statistical research topic, but it has been less studied. Existing literature focuses on parametric regression techniques that provide direct parameter estimates of the regression model. In practice, parametric regression models are often sub-optimal for variable selection because they are susceptible to misspecification. Machine learning methods considerably weaken the parametric assumptions and increase modeling flexibility, but they do not provide a variable importance measure as naturally defined as the covariate effects native to parametric models. We investigate a general variable selection approach when both the covariates and outcomes can be missing at random and have general missing data patterns. This approach exploits the flexibility of machine learning modeling techniques and bootstrap imputation, and it is amenable to nonparametric methods in which the covariate effects are not directly available. We conduct extensive simulations investigating the practical operating characteristics of the proposed variable selection approach combined with four tree-based machine learning methods (XGBoost, Random Forests, Bayesian Additive Regression Trees (BART), and Conditional Random Forests) and two commonly used parametric methods (lasso and backward stepwise selection). Numerical results show that XGBoost and BART have the overall best performance across various settings. Guidance for choosing methods appropriate to the structure of the analysis data at hand is discussed. We further demonstrate the methods via a case study of risk factors for 3-year incidence of metabolic syndrome with data from the Study of Women's Health Across the Nation.

* 25 pages, 14 figures, 4 tables 
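As a rough sketch of how bootstrap imputation pairs with an ML variable importance measure (the choices of imputer, model, and thresholds below are our own illustrative assumptions, not the authors' implementation):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

def bootstrap_impute_select(X, y, n_boot=100, top_k=10, threshold=0.5, seed=0):
    """Sketch: X (n, p) and y (n,) are numpy arrays with np.nan for missing
    entries. Draw bootstrap resamples, impute each (covariates and outcome
    jointly), fit an ML model, and select the variables whose importance
    ranks in the top_k in at least `threshold` of the resamples."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)            # bootstrap resample of rows
        data = np.column_stack([X[idx], y[idx]])    # impute X and y together
        imputed = IterativeImputer(random_state=0).fit_transform(data)
        Xi, yi = imputed[:, :p], imputed[:, p]
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(Xi, yi)
        top = np.argsort(model.feature_importances_)[-top_k:]
        counts[top] += 1                            # tally selection frequency
    return np.where(counts / n_boot >= threshold)[0]
```

The same recipe works with any learner exposing an importance measure, which is what makes the approach amenable to nonparametric methods.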


GraphSMOTE: Imbalanced Node Classification on Graphs with Graph Neural Networks

Mar 16, 2021
Tianxiang Zhao, Xiang Zhang, Suhang Wang

Node classification is an important research topic in graph learning. Graph neural networks (GNNs) have achieved state-of-the-art performance in node classification. However, existing GNNs address the setting where node samples for the different classes are balanced, while in many real-world scenarios some classes have far fewer instances than others. Directly training a GNN classifier in this case would under-represent samples from the minority classes and result in sub-optimal performance. It is therefore important to develop GNNs for imbalanced node classification, yet work on this problem is rather limited. Hence, we seek to extend previous imbalanced learning techniques for i.i.d. data to the imbalanced node classification task to facilitate GNN classifiers. In particular, we adopt synthetic minority over-sampling algorithms, as they are found to be the most effective and stable. This task is non-trivial: previous synthetic minority over-sampling algorithms fail to provide relation information for newly synthesized samples, which is vital for learning on graphs. Moreover, node attributes are high-dimensional, and directly over-sampling in the original input domain could generate out-of-domain samples that impair the accuracy of the classifier. We propose a novel framework, GraphSMOTE, in which an embedding space is constructed to encode the similarity among the nodes. New samples are synthesized in this space to ensure genuineness. In addition, an edge generator is trained simultaneously to model the relation information and provide it for the new samples. This framework is general and can easily be extended into different variants. The proposed framework is evaluated on three different datasets, and it outperforms all baselines by a large margin.

* Accepted by WSDM2021 
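The core over-sampling step can be pictured as classic SMOTE interpolation applied to learned node embeddings. The sketch below is our own illustration (names and the nearest-neighbour rule are assumptions); in GraphSMOTE the trained edge generator would then predict links for the synthetic embeddings:

```python
import numpy as np

def smote_in_embedding_space(Z, labels, minority_class, n_new, seed=0):
    """Sketch: Z is an (n, d) matrix of node embeddings. For each synthetic
    sample, pick a random minority-class node, find its nearest minority
    neighbour in embedding space, and interpolate between the two."""
    rng = np.random.default_rng(seed)
    idx = np.where(labels == minority_class)[0]
    Zm = Z[idx]
    new = []
    for _ in range(n_new):
        i = rng.integers(len(idx))
        dist = np.linalg.norm(Zm - Zm[i], axis=1)
        dist[i] = np.inf                   # exclude the node itself
        j = int(np.argmin(dist))           # nearest minority neighbour
        lam = rng.random()                 # interpolation weight in [0, 1)
        new.append(Zm[i] + lam * (Zm[j] - Zm[i]))
    return np.stack(new)
```

Interpolating in the embedding space rather than the raw attribute space is what keeps the synthetic samples in-domain.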


SPINS: Structure Priors aided Inertial Navigation System

Dec 28, 2020
Yang Lyu, Thien-Minh Nguyen, Liu Liu, Muqing Cao, Shenghai Yuan, Thien Hoang Nguyen, Lihua Xie

Although Simultaneous Localization and Mapping (SLAM) has been an active research topic for decades, current state-of-the-art methods still suffer from instability or inaccuracy in many civilian environments due to feature insufficiency or inherent estimation drift. To resolve these issues, we propose a navigation system combining SLAM and prior-map-based localization. Specifically, we integrate additional line and plane features, which are ubiquitous and more structurally salient in civilian environments, into the SLAM pipeline to ensure feature sufficiency and localization robustness. More importantly, we incorporate general prior map information into the SLAM to restrain its drift and improve the accuracy. To avoid rigorous association between prior information and local observations, we parameterize the prior knowledge as low-dimensional structural priors defined as relative distances/angles between different geometric primitives. The localization is formulated as a graph-based optimization problem that contains sliding-window-based variables and factors, including IMU, heterogeneous features, and structure priors. We also derive the analytical expressions of the Jacobians of the different factors to avoid the overhead of automatic differentiation. To further alleviate the computational burden of incorporating structural prior factors, a selection mechanism based on the so-called information gain is adopted to incorporate only the most effective structure priors in the graph optimization. Finally, the proposed framework is extensively tested on synthetic data, public datasets, and, more importantly, real UAV flight data obtained from a building inspection task. The results show that the proposed scheme can effectively improve the accuracy and robustness of localization for autonomous robots in civilian applications.

* 14 pages, 14 figures 
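The information-gain selection can be sketched with the standard log-determinant criterion (our notation; the paper's exact scoring may differ): a candidate prior factor with Jacobian H and noise covariance R contributes H^T R^{-1} H to the information matrix, and its gain is the induced change in log-determinant.

```python
import numpy as np

def information_gain(info_matrix, H, R):
    """Score a candidate structure-prior factor by the change in the
    log-determinant of the information matrix when the factor is added:
    0.5 * (logdet(Lambda + H^T R^{-1} H) - logdet(Lambda))."""
    delta = H.T @ np.linalg.solve(R, H)        # factor's information contribution
    _, logdet_new = np.linalg.slogdet(info_matrix + delta)
    _, logdet_old = np.linalg.slogdet(info_matrix)
    return 0.5 * (logdet_new - logdet_old)

# Candidates with the largest gain would be kept in the sliding-window graph.
```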


Comprehensive Graph-conditional Similarity Preserving Network for Unsupervised Cross-modal Hashing

Dec 25, 2020
Jun Yu, Hao Zhou, Yibing Zhan, Dacheng Tao

Unsupervised cross-modal hashing (UCMH) has recently become a hot topic. Current UCMH methods focus on exploring data similarities; however, they calculate the similarity between two data points mainly from the two points' cross-modal features. These methods suffer from inaccurate similarity estimates that result in a suboptimal retrieval Hamming space, because cross-modal features alone are not sufficient to describe complex data relationships, such as situations where two data points have different feature representations but share the same inherent concepts. In this paper, we devise a deep graph-neighbor coherence preserving network (DGCPN). Specifically, DGCPN stems from graph models and explores graph-neighbor coherence by consolidating the information between data points and their neighbors. DGCPN regulates comprehensive similarity-preserving losses by exploiting three types of data similarity (the graph-neighbor coherence, the coexistent similarity, and the intra- and inter-modality consistency) and designs a half-real, half-binary optimization strategy to reduce quantization errors during hashing. Essentially, DGCPN addresses the inaccurate similarity problem by exploring and exploiting the data's intrinsic relationships in a graph. We conduct extensive experiments on three public UCMH datasets. The experimental results demonstrate the superiority of DGCPN, e.g., improving the mean average precision from 0.722 to 0.751 on MIRFlickr-25K when using 64-bit hashing codes to retrieve texts from images. We will release the source code package and the trained model at https://github.com/Atmegal/DGCPN.
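As a rough illustration of a similarity-preserving hashing loss of this general kind (a simplified sketch, not DGCPN's actual objective), one can fuse the similarity types into a target matrix S and push inner products of relaxed hash codes towards it:

```python
import numpy as np

def similarity_preserving_loss(B, S):
    """Simplified sketch: B is an (n, k) matrix of relaxed hash codes in
    [-1, 1], S an (n, n) fused target similarity in [-1, 1]; penalize the
    mismatch between normalized code inner products and the targets."""
    k = B.shape[1]
    return np.mean((B @ B.T / k - S) ** 2)
```

The half-real, half-binary strategy described above would then progressively constrain entries of B towards {-1, +1} to limit quantization error.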



Machine Learning for Detecting Data Exfiltration

Dec 17, 2020
Bushra Sabir, Faheem Ullah, M. Ali Babar, Raj Gaire

Context: Research at the intersection of cybersecurity, Machine Learning (ML), and Software Engineering (SE) has recently taken significant steps in proposing countermeasures for detecting sophisticated data exfiltration attacks. It is important to systematically review and synthesize the ML-based data exfiltration countermeasures to build a body of knowledge on this important topic. Objective: This paper aims to systematically review ML-based data exfiltration countermeasures in order to identify and classify the ML approaches, feature engineering techniques, evaluation datasets, and performance metrics used for these countermeasures. The review also aims to identify gaps in research on ML-based data exfiltration countermeasures. Method: We used the Systematic Literature Review (SLR) method to select and review 92 papers. Results: The review has enabled us to (a) classify the ML approaches used in the countermeasures into data-driven and behaviour-driven approaches, (b) categorize features into six types (behavioural, content-based, statistical, syntactical, spatial, and temporal), (c) classify the evaluation datasets into simulated, synthesized, and real datasets, and (d) identify 11 performance measures used by these studies. Conclusion: We conclude that (i) the integration of data-driven and behaviour-driven approaches should be explored; (ii) there is a need to develop high-quality, large-scale evaluation datasets; (iii) incremental ML model training should be incorporated in countermeasures; (iv) resilience to adversarial learning should be considered and explored during the development of countermeasures to avoid poisoning attacks; and (v) the use of automated feature engineering should be encouraged for efficiently detecting data exfiltration attacks.



High-Resolution Air Quality Prediction Using Low-Cost Sensors

Jun 22, 2020
Thibaut Cassard, Grégoire Jauvion, David Lissmyr

The use of low-cost sensors in air quality monitoring networks is still a much-debated topic among practitioners: they are much cheaper than the traditional air quality monitoring stations set up by public authorities (a few hundred dollars compared to tens of thousands of dollars), at the cost of lower accuracy and robustness. This paper presents a case study of using low-cost sensor measurements in an air quality prediction engine. The engine jointly predicts PM2.5 and PM10 concentrations in the United States at a very high resolution, on the order of a few tens of meters. It is fed with measurements from official air quality monitoring stations, measurements from a network of more than 4000 low-cost sensors across the country, and traffic estimates. We show that the use of low-cost sensor measurements improves the engine's accuracy very significantly. In particular, we find a strong link between the density of low-cost sensors and prediction accuracy: the more low-cost sensors there are in an area, the more accurate the predictions. As an illustration, in areas with the highest density of low-cost sensors, the low-cost sensor measurements bring 25% and 15% improvements in PM2.5 and PM10 prediction accuracy, respectively. Another strong conclusion is that in some areas with a high density of low-cost sensors, the engine performs better when fed with low-cost sensor measurements only than when fed with official monitoring station measurements only. This suggests that an air quality monitoring network composed of low-cost sensors can effectively monitor air quality, an important result given that such a network is much cheaper to set up.

* 7 pages, 6 figures. arXiv admin note: substantial text overlap with arXiv:2002.10394 


TEALS: Time-aware Text Embedding Approach to Leverage Subgraphs

Jul 06, 2019
Saeid Hosseini, Saeed Najafi Pour, Ngai-Man Cheung, Mohammad Reza Kangavari, Xiaofang Zhou, Yuval Elovici

Given a graph over which contagions (e.g., a virus or gossip) propagate, leveraging subgraphs with highly correlated nodes is beneficial to many applications. Yet challenges abound. First, the propagation pattern between a pair of nodes may change across temporal dimensions. Second, the same contagion is not always the one propagated; hence, state-of-the-art text mining approaches, ranging from similarity measures to topic modeling, cannot use the textual contents to compute the weights between the nodes. Third, word-word co-occurrence patterns may differ across temporal dimensions, which makes it harder to employ current word embedding approaches. We argue that inseparable multi-aspect temporal collaborations are needed to better calculate correlation metrics in dynamical processes. In this work, we showcase a framework that, on the one hand, integrates a neural-network-based time-aware word embedding component that collectively constructs word vectors through an assembly of infinite latent temporal facets and, on the other hand, uses an elaborate generative model to compute the edge weights through heterogeneous temporal attributes. After computing the weights between the nodes, we utilize our Max-Heap graph cutting algorithm to exploit subgraphs. We then validate our model through comprehensive experiments on real-world propagation data. The results show that the knowledge gained from the versatile temporal dynamics is not only indispensable for word embedding approaches but also plays a significant role in understanding propagation behaviors. Finally, we demonstrate that, compared with other rivals, our model is dominant in exploiting subgraphs with highly coordinated nodes.



Transport Analysis of Infinitely Deep Neural Network

Oct 31, 2018
Sho Sonoda, Noboru Murata

We investigated the feature map inside deep neural networks (DNNs) by tracking the transport map. We are interested in the role of depth (why do DNNs perform better than shallow models?) and the interpretation of DNNs (what do intermediate layers do?). Despite the rapid development in their applications, DNNs remain analytically unexplained because the hidden layers are nested and the parameters are not faithful. Inspired by the integral representation of shallow NNs, which is the continuum limit of the width (the number of hidden units), we developed the flow representation and transport analysis of DNNs. The flow representation is the continuum limit of the depth (the number of hidden layers), and it is specified by an ordinary differential equation with a vector field. We interpret an ordinary DNN as a transport map, or an Euler broken-line approximation of the flow. Technically speaking, a dynamical system is a natural model for the nested feature maps. In addition, it opens a new way to a coordinate-free treatment of DNNs by avoiding their redundant parametrization. Following Wasserstein geometry, we analyze the flow in three aspects: as a dynamical system, via the continuity equation, and as a Wasserstein gradient flow. A key finding is that we specified a series of transport maps of the denoising autoencoder (DAE). Starting from the shallow DAE, this paper develops three topics: the transport map of the deep DAE, the equivalence between the stacked DAE and the composition of DAEs, and the double continuum limit, i.e., the integral representation of the flow representation. As partial answers to the research questions, we found that deeper DAEs converge faster and the extracted features are better; in addition, a deep Gaussian DAE transports mass so as to decrease the Shannon entropy of the data distribution.
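Schematically, in our notation (consistent with the description above, though the paper's symbols may differ), the flow representation replaces the layer index by continuous time:

$$\dot{x}(t) = v\bigl(x(t), t\bigr), \qquad x(0) = x_{\mathrm{in}},$$

where $v$ is the vector field, and an ordinary $L$-layer DNN corresponds to the Euler broken-line approximation of this flow with step size $h = 1/L$:

$$x_{\ell+1} = x_{\ell} + h\, v(x_{\ell}, t_{\ell}).$$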


