Adversarial contrastive learning (ACL) does not require expensive data annotations yet outputs a robust representation that withstands adversarial attacks and generalizes to a wide range of downstream tasks. However, ACL requires tremendous running time to generate the adversarial variants of all training data, which limits its scalability to large datasets. To speed up ACL, this paper proposes a robustness-aware coreset selection (RCS) method. RCS does not require label information and searches for an informative subset that minimizes a representational divergence, i.e., the distance between the representations of natural data and their virtual adversarial variants. The vanilla solution of RCS, traversing all possible subsets, is computationally prohibitive. Therefore, we theoretically transform RCS into a surrogate problem of submodular maximization, for which greedy search is an efficient solution with an optimality guarantee for the original problem. Empirically, our comprehensive results corroborate that RCS can speed up ACL by a large margin without significantly hurting robustness or standard transferability. Notably, to the best of our knowledge, we are the first to conduct ACL efficiently on the large-scale ImageNet-1K dataset, obtaining an effective robust representation via RCS.
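The greedy search the abstract mentions can be sketched generically. Below is a minimal example of greedy maximization of a monotone submodular set function under a cardinality budget; the facility-location coverage objective and the RBF similarity are illustrative stand-ins, not the paper's actual representational-divergence objective.

```python
import numpy as np

def facility_location(subset, sim):
    """Monotone submodular coverage: how well `subset` covers all points."""
    if not subset:
        return 0.0
    return float(np.sum(sim[:, subset].max(axis=1)))

def greedy_select(sim, budget):
    """Greedily add the element with the largest marginal gain."""
    selected, value = [], 0.0
    remaining = set(range(sim.shape[1]))
    for _ in range(budget):
        gains = {j: facility_location(selected + [j], sim) - value
                 for j in remaining}
        best = max(gains, key=gains.get)
        value += gains[best]
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
sim = np.exp(-d2 / d2.mean())          # nonnegative RBF similarity
coreset = greedy_select(sim, budget=5)
print(coreset)
```

For a monotone submodular objective like this one, the greedy solution enjoys the classic (1 - 1/e) approximation guarantee, which is the kind of optimality guarantee the surrogate formulation buys.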
Gas leakage is a critical problem in the industrial sector, residential structures, and gas-powered vehicles; installing gas leakage detection systems is one of the preventative strategies for reducing the hazards it causes. Conventional gas sensors, such as electrochemical, infrared point, and MOS sensors, have traditionally been used to detect leaks. The challenge with these sensors is their limited versatility in settings involving many gases, as well as their high cost and poor scalability. As a result, several alternative gas detection approaches have been explored. Our approach uses a 40 kHz ultrasound signal for gas detection: the reflected signal is analyzed to detect gas leaks and identify the gas in real time, providing a quick, reliable solution for gas leak detection in industrial environments. The electronics and sensors used are both low-cost and easily scalable. The system incorporates commonly available materials and off-the-shelf components, making it suitable for use in a variety of contexts. It is also more effective at detecting multiple gas leaks and has a longer lifetime. Butane was used to test our system: leaks were identified 0.01 seconds after gas was allowed to flow from a broken pipe, while identifying the gas took 0.8 seconds.
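One simple way to analyze a reflected 40 kHz burst, sketched below, is to demodulate the echo at the carrier frequency and flag a leak when the echo amplitude drops below a baseline. This is a hypothetical detection rule for illustration; the sample rate, threshold ratio, and attenuation model are assumptions, not values from the paper.

```python
import numpy as np

FS = 400_000          # sample rate in Hz (assumed)
F0 = 40_000           # ultrasonic carrier in Hz

def echo_amplitude(signal):
    """Amplitude of the 40 kHz component via quadrature demodulation."""
    t = np.arange(len(signal)) / FS
    i = signal * np.cos(2 * np.pi * F0 * t)
    q = signal * np.sin(2 * np.pi * F0 * t)
    return 2.0 * np.hypot(i.mean(), q.mean())

def gas_detected(signal, baseline_amp, drop_ratio=0.5):
    """Flag a leak when the echo is attenuated below the baseline."""
    return echo_amplitude(signal) < drop_ratio * baseline_amp

t = np.arange(0, 0.005, 1 / FS)
clean = np.sin(2 * np.pi * F0 * t)             # echo in plain air
attenuated = 0.3 * np.sin(2 * np.pi * F0 * t)  # echo through a gas plume
base = echo_amplitude(clean)
print(gas_detected(clean, base), gas_detected(attenuated, base))
```

A fixed drop ratio keeps the per-sample decision cheap, which matters for the 0.01-second detection latency the abstract reports.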
The deployment of 3D detectors poses one of the major challenges in real-world self-driving scenarios. Existing BEV-based (i.e., Bird's-Eye View) detectors favor sparse convolution (known as SPConv) to speed up training and inference, which puts a hard barrier to deployment, especially for on-device applications. In this paper, we tackle the problem of efficient 3D object detection from LiDAR point clouds with deployment in mind. To reduce the computational burden, we propose a pillar-based 3D detector with high performance from an industry perspective, termed FastPillars. Compared with previous methods, we introduce a more effective Max-and-Attention pillar encoding (MAPE) module and redesign a powerful and lightweight backbone, CRVNet, imbued with a Cross Stage Partial network (CSP) in a reparameterization style, forming a compact feature representation framework. Extensive experiments demonstrate that FastPillars surpasses state-of-the-art 3D detectors in both on-device speed and performance. Specifically, FastPillars can be effectively deployed through TensorRT, achieving real-time performance (24 FPS) on a single RTX 3070Ti GPU with 64.6 mAP on the nuScenes test set. Our code is publicly available at: https://github.com/StiphyJay/FastPillars.
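The max-and-attention idea can be illustrated in isolation: pool the point features inside a pillar both by max and by an attention-weighted average, then fuse the two. The random attention weights and the simple averaging fusion below are illustrative assumptions; the exact MAPE design in FastPillars may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mape_encode(pillar_points, w_attn):
    """pillar_points: (N, C) point features inside one pillar."""
    max_feat = pillar_points.max(axis=0)        # (C,) max pooling
    scores = pillar_points @ w_attn             # (N,) per-point scores
    attn = softmax(scores, axis=0)              # (N,) attention weights
    attn_feat = attn @ pillar_points            # (C,) attention pooling
    return 0.5 * (max_feat + attn_feat)         # (C,) fused encoding

rng = np.random.default_rng(0)
points = rng.normal(size=(32, 9))   # 32 points, 9 raw features each
w = rng.normal(size=9)              # stand-in for learned weights
feat = mape_encode(points, w)
print(feat.shape)  # (9,)
```

Max pooling keeps the sharpest per-channel responses while the attention branch preserves information from non-maximal points, which is the intuition behind combining the two.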
Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples. Adversarial training (AT) is a popular and effective strategy to defend against adversarial attacks. Recent works (Benz et al., 2020; Xu et al., 2021; Tian et al., 2021) have shown that a robust model well-trained by AT exhibits a remarkable robustness disparity among classes, and have proposed various methods to obtain consistent robust accuracy across classes. Unfortunately, these methods sacrifice a good deal of average robust accuracy. Accordingly, this paper proposes a novel framework of worst-class adversarial training and leverages no-regret dynamics to solve it. Our goal is to obtain a classifier that performs well on the worst class while sacrificing only a little average robust accuracy. We then rigorously analyze the theoretical properties of our proposed algorithm and derive a generalization error bound in terms of the worst-class robust risk. Furthermore, we propose a measurement to evaluate the proposed method in terms of both average and worst-class accuracies. Experiments on various datasets and networks show that our proposed method outperforms state-of-the-art approaches.
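A common way to realize no-regret dynamics over classes, sketched below, is multiplicative weights (Hedge): an adversary upweights the classes with the highest robust loss, steering training toward the worst class. The simulated per-class losses and the step size are illustrative assumptions; in the paper, the inner step would be adversarial training of the network under the current class weighting.

```python
import numpy as np

def hedge_update(weights, class_losses, eta=0.5):
    """Multiplicative-weights step: upweight classes with high loss."""
    w = weights * np.exp(eta * class_losses)
    return w / w.sum()

rng = np.random.default_rng(0)
n_classes = 10
w = np.full(n_classes, 1.0 / n_classes)   # start from uniform weights
for _ in range(50):
    # simulated per-class robust losses; class 3 is persistently hardest
    losses = rng.uniform(0.1, 0.3, size=n_classes)
    losses[3] = 0.9
    w = hedge_update(w, losses)
print(w.argmax())  # the hardest class dominates the weight vector
```

Because Hedge is a no-regret algorithm, the time-averaged play of this adversary converges to the worst-case class weighting, which is what makes the worst-class guarantee analyzable.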
Hybrid ventilation (coupling natural and mechanical ventilation) is an energy-efficient solution for providing fresh air in most climates, provided that it has a reliable control system. To operate such systems optimally, a high-fidelity control-oriented model is required. It should enable near-real-time forecasts of indoor air temperature and humidity based on operational conditions such as window opening and HVAC schedules. However, widely used physics-based simulation models (i.e., white-box models) are labour-intensive and computationally expensive. Alternatively, black-box models based on artificial neural networks can be trained to be good estimators of building dynamics. This paper investigates the capability of a multivariate multi-head attention-based long short-term memory (LSTM) encoder-decoder neural network to predict the indoor air conditions of a building equipped with hybrid ventilation. The deep neural network aims to predict indoor air temperature dynamics when a window is opened or closed. Training and test data were generated from a detailed multi-zone office building model (EnergyPlus). The deep neural network accurately predicts the indoor air temperature of five zones whenever a window is opened or closed.
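The attention step inside such an encoder-decoder can be sketched in a few lines: the decoder state queries the encoder's hidden states so the forecast can focus on the time steps around window events. The dimensions, random states, and per-head split below are illustrative, not the paper's actual model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(query, keys, values, n_heads):
    """query: (d,); keys/values: (T, d); returns a context vector (d,)."""
    d = query.shape[0]
    dh = d // n_heads
    ctx = np.empty(d)
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        scores = keys[:, s] @ query[s] / np.sqrt(dh)   # (T,) similarities
        attn = softmax(scores)                         # (T,) weights
        ctx[s] = attn @ values[:, s]                   # (dh,) per-head mix
    return ctx

rng = np.random.default_rng(0)
T, d = 24, 16                       # 24 past hours, 16-dim hidden states
enc_states = rng.normal(size=(T, d))
dec_state = rng.normal(size=d)
context = multi_head_attention(dec_state, enc_states, enc_states, n_heads=4)
print(context.shape)  # (16,)
```

Each head forms a convex combination of the encoder states, so different heads can attend to different moments, e.g. one to the window-opening instant and another to the recent HVAC schedule.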
The purpose of this study is to investigate the potential benefits of using Alamouti-like orthogonal space-time-frequency block codes (STFBC) in distributed multiple-input multiple-output (D-MIMO) systems to increase diversity at the user equipment (UE) side when instantaneous channel state information (CSI) is not available at the radio units (RUs). Most existing transmission techniques require instantaneous CSI to form precoders, which can only be realized with accurate and up-to-date channel knowledge. STFBC can increase diversity at the UE side without estimating the downlink channel. Under challenging channel conditions, the network can switch to a robust mode in which a certain data rate is maintained for users by means of STFBC, even without knowing the channel coefficients. This study mainly focuses on the clustering of RUs and user equipment, where each cluster adopts a possibly different orthogonal code so that overall spectral efficiency is optimized. We show potential performance gains over known techniques that are applicable when the channel is unknown, and identify the performance gaps to sophisticated precoders that make use of channel estimates.
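The transmit-diversity-without-CSI idea rests on the classic 2x1 Alamouti code: two symbols are sent over two antennas and two time slots so the receiver gets full diversity while the transmitter needs no channel knowledge (only the receiver combines using its channel estimates). Below is a minimal noise-free sketch; the space-time-frequency extension across subcarriers follows the same orthogonal pattern.

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Rows: time slots, columns: transmit antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Receiver-side combining with channel gains h1, h2."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    g = abs(h1) ** 2 + abs(h2) ** 2   # diversity gain |h1|^2 + |h2|^2
    return s1_hat / g, s2_hat / g

s1, s2 = 1 + 1j, -1 + 1j                 # two QPSK symbols
h1, h2 = 0.8 - 0.3j, 0.2 + 0.6j          # flat channel gains (example)
X = alamouti_encode(s1, s2)
r1 = h1 * X[0, 0] + h2 * X[0, 1]         # received, slot 1 (noise-free)
r2 = h1 * X[1, 0] + h2 * X[1, 1]         # received, slot 2
d1, d2 = alamouti_combine(r1, r2, h1, h2)
print(np.round(d1, 6), np.round(d2, 6))  # recovers s1, s2 exactly
```

The orthogonality of the code matrix is what decouples the two symbols at the combiner, giving the |h1|^2 + |h2|^2 diversity gain without any CSI at the transmit side.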
In this work, we demonstrate the viability of using federated learning on low-power, small-footprint embedded devices to successfully predict energy consumption as well as solar production for all households within a given network. We also demonstrate that our prediction performance improves over time without the need to share private consumer energy data. We simulate a four-node system using one year of data to show this.
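The privacy property comes from the federated-averaging pattern: each household node fits a local model on its own data, and only model parameters, never raw consumption data, are averaged at the server. The linear model and synthetic data below are illustrative stand-ins for the forecasting models in the paper.

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=20):
    """Local least-squares gradient descent on one household's data."""
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

def fed_avg(local_weights):
    """Server step: average the parameter vectors, not the data."""
    return np.mean(local_weights, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])            # ground-truth relationship
w_global = np.zeros(2)
for _ in range(10):                       # communication rounds
    updates = []
    for _node in range(4):                # four household nodes
        X = rng.normal(size=(50, 2))      # private local features
        y = X @ true_w + 0.05 * rng.normal(size=50)
        updates.append(local_step(w_global.copy(), X, y))
    w_global = fed_avg(updates)
print(np.round(w_global, 2))  # approaches [2.0, -1.0]
```

Each round transmits only two floats per node, which is what makes the scheme practical on low-power embedded devices.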
It has long been known that photonic science, and especially photonic communications, can raise the speed of technologies and manufacturing. More recently, photonics has also attracted interest for its ability to implement low-precision linear operations, such as matrix multiplications, quickly and efficiently. For a long time, many scientists thought that electronics was the final word; only about 35 years ago did it become understood that electronics alone cannot answer every need and that a new science was required. Today we have modern methods and instruments for performing tasks far faster than decades before, and the rapid progress of science depends on up-to-date knowledge of these new methods. In this research, we review the concept of a photonic neural network. We selected 18 main articles from among the 30 principal articles on this subject published between 2015 and 2022. These articles address three aspects: (1) experimental concepts, (2) theoretical concepts, and (3) mathematical concepts. Mathematics plays an important and constructive role throughout, and simulation, a particularly current and valid topic, is used in parts of this research. First, we briefly introduce photonics and neural networks. Second, we explain the advantages and disadvantages of combining the two in science, industry, and related technologies, and discuss the achievements of this young field. Third, we introduce some important and valid parameters of neural networks, drawing on mathematical tools in portions of the article.
We study the complexity of optimizing nonsmooth nonconvex Lipschitz functions by producing $(\delta,\epsilon)$-stationary points. Several recent works have presented randomized algorithms that produce such points using $\tilde O(\delta^{-1}\epsilon^{-3})$ first-order oracle calls, independent of the dimension $d$. It has been an open problem as to whether a similar result can be obtained via a deterministic algorithm. We resolve this open problem, showing that randomization is necessary to obtain a dimension-free rate. In particular, we prove a lower bound of $\Omega(d)$ for any deterministic algorithm. Moreover, we show that unlike smooth or convex optimization, access to function values is required for any deterministic algorithm to halt within any finite time. On the other hand, we prove that if the function is even slightly smooth, then the dimension-free rate of $\tilde O(\delta^{-1}\epsilon^{-3})$ can be obtained by a deterministic algorithm with merely a logarithmic dependence on the smoothness parameter. Motivated by these findings, we turn to study the complexity of deterministically smoothing Lipschitz functions. Though there are efficient black-box randomized smoothings, we start by showing that no such deterministic procedure can smooth functions in a meaningful manner, resolving an open question. We then bypass this impossibility result for the structured case of ReLU neural networks. To that end, in a practical white-box setting in which the optimizer is granted access to the network's architecture, we propose a simple, dimension-free, deterministic smoothing that provably preserves $(\delta,\epsilon)$-stationary points. Our method applies to a variety of architectures of arbitrary depth, including ResNets and ConvNets. Combined with our algorithm, this yields the first deterministic dimension-free algorithm for optimizing ReLU networks, circumventing our lower bound.
When building datasets, one needs to invest time, money, and energy to either aggregate more data or to improve its quality. The most common practice favors quantity over quality without necessarily quantifying the trade-off that emerges. In this work, we study data-driven contextual decision-making and the performance implications of the quality and quantity of data. We focus on contextual decision-making with a Newsvendor loss. This loss is that of a central capacity planning problem in Operations Research, but also that associated with quantile regression. We consider a model in which outcomes observed in similar contexts have similar distributions, and analyze the performance of a classical class of kernel policies that weigh data according to their similarity in a contextual space. We develop a series of results that lead to an exact characterization of the worst-case expected regret of these policies. This exact characterization applies to any sample size and any observed contexts. The model we develop is flexible and captures the case of partially observed contexts. This exact analysis enables us to unveil new structural insights into the learning behavior of uniform kernel methods: (i) the specialized analysis leads to very large improvements in the quantification of performance compared to state-of-the-art general-purpose bounds; (ii) we show an important non-monotonicity of performance as a function of data size that is not captured by previous bounds; and (iii) we show that in some regimes, a small increase in the quality of the data can dramatically reduce the number of samples required to reach a performance target. All in all, our work demonstrates that it is possible to quantify in a precise fashion the interplay of data quality, data quantity, and performance in a central problem class. It also highlights the need for problem-specific bounds in order to understand the trade-offs at play.
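A uniform kernel policy for the Newsvendor loss can be sketched concretely: to decide at a new context, take the empirical b/(b+h) quantile (the Newsvendor-optimal order level) of past demands whose contexts fall within a bandwidth of the query. The bandwidth, cost parameters, and fallback rule below are illustrative choices, not the paper's tuned quantities.

```python
import numpy as np

def uniform_kernel_newsvendor(contexts, demands, x, bandwidth, b, h):
    """Order quantity at context x: b/(b+h) quantile of nearby demands."""
    near = np.abs(contexts - x) <= bandwidth   # uniform kernel window
    if not near.any():                         # no similar data: fall back
        return float(np.quantile(demands, b / (b + h)))
    return float(np.quantile(demands[near], b / (b + h)))

rng = np.random.default_rng(0)
contexts = rng.uniform(0, 1, size=500)
# the demand distribution shifts with the context
demands = 10 * contexts + rng.normal(0, 1, size=500)
q = uniform_kernel_newsvendor(contexts, demands, x=0.8,
                              bandwidth=0.1, b=3.0, h=1.0)
print(round(q, 2))  # roughly the 75% quantile of demand near x=0.8
```

The bandwidth is exactly where the quality-quantity trade-off appears: a narrow window uses fewer but more relevant samples, a wide window more but noisier ones, which is the tension the exact regret characterization quantifies.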