
Abstract:Noise radars have the same mathematical description as a type of quantum radar known as quantum two-mode squeezing radar. Although their physical implementations are very different, this mathematical similarity allows us to analyze them collectively. We may consider the two types of radars as forming a single class of radars, called noise-type radars. The target detection performance of noise-type radars depends on two parameters: the number of integrated samples and a correlation coefficient. In this paper, we show that when the number of integrated samples is large and the correlation coefficient is low, the detection performance becomes a function of a single parameter: the number of integrated samples multiplied by the square of the correlation coefficient. We then explore the detection performance of noise-type radars in terms of this emergent parameter; in particular, we determine the probability of detection as a function of this parameter.
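The N·ρ² scaling described in the abstract can be illustrated with a small Monte Carlo sketch. This is not the paper's model, just a simplified unit-variance setup where the detector is the sample correlation between the received and reference signals: two settings with different numbers of integrated samples N and correlation coefficients ρ, but the same N·ρ², yield nearly the same deflection coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)

def deflection(n_samples, rho, trials=500):
    """Deflection coefficient of the sample-correlation detector
    D = mean(x * y) for jointly Gaussian, unit-variance pairs (x, y)."""
    d_h1 = np.empty(trials)
    d_h0 = np.empty(trials)
    for t in range(trials):
        x = rng.standard_normal(n_samples)
        # H1: received signal correlated with the reference (coefficient rho)
        y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n_samples)
        d_h1[t] = np.mean(x * y)
        # H0: no target, received signal independent of the reference
        d_h0[t] = np.mean(x * rng.standard_normal(n_samples))
    return (d_h1.mean() - d_h0.mean()) / d_h0.std()

# Two settings with the same n * rho^2 = 4: both give a deflection of
# approximately rho * sqrt(n) = 2, despite different n and rho.
a = deflection(10_000, 0.02)
b = deflection(40_000, 0.01)
```

For small ρ the deflection is approximately ρ√N, so any combination with the same N·ρ² sits at the same operating point.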


Abstract:Standard noise radars, as well as noise-type radars such as quantum two-mode squeezing radar, are characterized by a covariance matrix with a very specific structure. This matrix has four independent parameters: the amplitude of the received signal, the amplitude of the internal signal used for matched filtering, the correlation between the two signals, and the relative phase between them. In this paper, we derive estimators for these four parameters using two techniques. The first is based on minimizing the Frobenius norm between the structured covariance matrix and the sample covariance matrix; the second is maximum likelihood parameter estimation. The two techniques yield the same estimators. We then give probability density functions (PDFs) for all four estimators. Because some of these PDFs are quite complicated, we also provide approximate PDFs. Finally, we apply our results to the problem of target detection and derive expressions for the receiver operating characteristic curves of two different noise radar detectors.
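A minimal sketch of the four-parameter structured covariance matrix and the kind of closed-form estimators a Frobenius-norm fit to that structure produces. The reflection-type off-diagonal block and the exact estimator formulas below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def structured_cov(s1, s2, rho, phi):
    """Hypothetical 4x4 covariance of (I1, Q1, I2, Q2) in terms of the four
    parameters from the abstract: received amplitude s1, reference amplitude
    s2, correlation rho, and relative phase phi. The reflection-type
    off-diagonal block is an assumption for illustration."""
    c = rho * s1 * s2 * np.cos(phi)
    s = rho * s1 * s2 * np.sin(phi)
    return np.array([
        [s1**2, 0.0,   c,     s],
        [0.0,   s1**2, s,    -c],
        [c,     s,     s2**2, 0.0],
        [s,    -c,     0.0,   s2**2],
    ])

def estimate(samples):
    """Plug-in estimators: average the sample-covariance entries according
    to the assumed structure, then read off the four parameters."""
    S = np.cov(samples, rowvar=False)
    s1 = np.sqrt(0.5 * (S[0, 0] + S[1, 1]))
    s2 = np.sqrt(0.5 * (S[2, 2] + S[3, 3]))
    pc = 0.5 * (S[0, 2] - S[1, 3])   # estimates rho * s1 * s2 * cos(phi)
    ps = 0.5 * (S[0, 3] + S[1, 2])   # estimates rho * s1 * s2 * sin(phi)
    return s1, s2, np.hypot(pc, ps) / (s1 * s2), np.arctan2(ps, pc)

true_params = (1.0, 1.5, 0.3, 0.8)
data = rng.multivariate_normal(np.zeros(4),
                               structured_cov(*true_params), size=200_000)
est = estimate(data)
```

With enough samples the estimates recover the parameters used to generate the data; the abstract's point is that the true sampling distributions of such estimators can be worked out exactly.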


Abstract:We derive a detector that optimizes the target detection performance of any single-input single-output noise radar satisfying the following properties: it transmits Gaussian noise, it retains an internal reference signal for matched filtering, all external noise is additive white Gaussian noise, and all signals are measured using heterodyne receivers. This class of radars, which we call noise-type radars, includes not only many types of standard noise radars, but also a type of quantum radar known as quantum two-mode squeezing radar. The detector, which we derive using the Neyman-Pearson lemma, is not practical because it requires advance knowledge of a target-dependent correlation coefficient. (It is, however, a natural standard of comparison for other detectors.) This motivates us to study the family of Neyman-Pearson-based detectors that result when the correlation coefficient is treated as a parameter. We derive the probability distribution of the Neyman-Pearson-based detectors when there is a mismatch between the pre-chosen parameter value and the true correlation coefficient. We then use this result to generate receiver operating characteristic curves. Finally, we apply our results to the case where the correlation coefficient is small. It turns out that the resulting detector not only performs well, but has appeared previously in the quantum radar literature.
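The mismatch idea can be sketched in a simplified bivariate, unit-variance model (an assumption; the paper's full heterodyne model is richer): the Neyman-Pearson statistic for an assumed correlation coefficient is a per-sample Gaussian log-likelihood ratio, and we can compare its performance when the assumed value matches or mismatches the true one.

```python
import numpy as np

rng = np.random.default_rng(3)

def llr(x, y, rho):
    """Gaussian log-likelihood ratio (correlation rho vs. rho = 0) for
    unit-variance sample pairs, with the data-independent constant dropped;
    summing over samples gives the Neyman-Pearson statistic for assumed rho."""
    return (rho * x * y - 0.5 * rho**2 * (x**2 + y**2)) / (1 - rho**2)

def detect(rho_assumed, rho_true, n=2000, trials=300):
    """Detector outputs over many trials under H1 (correlated, target
    present) and H0 (uncorrelated, target absent)."""
    x = rng.standard_normal((trials, n))
    y1 = rho_true * x + np.sqrt(1 - rho_true**2) * rng.standard_normal((trials, n))
    y0 = rng.standard_normal((trials, n))
    return llr(x, y1, rho_assumed).sum(axis=1), llr(x, y0, rho_assumed).sum(axis=1)

def auc(d1, d0):
    # Probability that the detector ranks an H1 trial above an H0 trial.
    return np.mean(d1[:, None] > d0[None, :])

auc_matched = auc(*detect(0.1, 0.1))   # assumed rho equals the true rho
auc_mismatch = auc(*detect(0.5, 0.1))  # assumed rho far from the true rho
```

In this toy setting the mismatched detector still detects, just with a somewhat degraded operating characteristic, which is the behavior the paper quantifies exactly.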


Abstract:Finding the largest cardinality feasible subset of an infeasible set of linear constraints is the Maximum Feasible Subsystem problem (MAX FS). Solving this problem is crucial in a wide range of applications such as machine learning and compressive sensing. Although MAX FS is NP-hard, useful heuristic algorithms exist, but these can be slow for large problems. We extend the existing heuristics for the case of dense constraint matrices to greatly increase their speed while preserving or improving solution quality. We test the extended algorithms on two applications that have dense constraint matrices: binary classification, and sparse recovery in compressive sensing. In both cases, speed is greatly increased with no loss of accuracy.
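To make the MAX FS problem concrete, here is a generic greedy heuristic sketch, not the paper's extended algorithms: attempt to satisfy the active constraints of A·x ≤ b with Agmon-Motzkin relaxation projections, and when that stalls, permanently drop the most-violated constraint and retry.

```python
import numpy as np

def max_fs_heuristic(A, b, inner=500, tol=1e-9):
    """Greedy MAX FS heuristic for A @ x <= b (illustrative only)."""
    active = np.ones(len(b), dtype=bool)
    x = np.zeros(A.shape[1])
    while active.any():
        # Phase 1: Agmon-Motzkin projections onto the worst violated halfspace.
        for _ in range(inner):
            r = A[active] @ x - b[active]
            worst = np.argmax(r)
            if r[worst] <= tol:
                return active, x          # current active set is satisfied
            a = A[active][worst]
            x = x - (r[worst] / (a @ a)) * a
        # Phase 2: still infeasible, so drop the most-violated constraint.
        r = A[active] @ x - b[active]
        active[np.flatnonzero(active)[np.argmax(r)]] = False
    return active, x

# Toy instance: a feasible box in (x1, x2) plus two constraints
# (x1 <= -5 and x1 >= 5) that cannot all hold together.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0],
              [1.0, 0.0], [-1.0, 0.0]])
b = np.array([1.0, 1.0, 1.0, 1.0, -5.0, -5.0])
active, x = max_fs_heuristic(A, b)
```

On this toy instance the heuristic keeps a feasible subset of four of the six constraints; the paper's contribution is making this kind of search fast when the constraint matrix is large and dense.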
