In the cybersecurity setting, defenders are often at the mercy of their detection technologies and subject to the information and experiences of individual analysts. To give defenders an advantage, it is important to understand an attacker's motivation and their likely next best action. As a first step toward modeling this behavior, we introduce a security game framework that simulates the interplay between attackers and defenders in a noisy environment. We focus on the factors that drive decision making for attackers and defenders across three variants of the game: full knowledge and observability, knowledge of the parameters but no observability of the state (``partial knowledge''), and no knowledge or observability (``zero knowledge''). We demonstrate the importance of making the right assumptions about attackers, given the significant differences in outcomes. Furthermore, there is a measurable trade-off between false positives and true positives in terms of attacker outcomes, suggesting that a more false-positive-prone environment may be acceptable when true positives are also higher.
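To make the three knowledge variants concrete, the following is a minimal, hypothetical Python sketch of one noisy detection round; the function names, probabilities, and state structure are illustrative assumptions, not the paper's implementation.

    import random

    def detector_fires(attack_happened, tp_rate=0.7, fp_rate=0.1):
        # Noisy detector: fires with probability tp_rate on a real attack,
        # fp_rate otherwise (the false-positive/true-positive trade-off above).
        return random.random() < (tp_rate if attack_happened else fp_rate)

    def attacker_observation(true_state, params, variant):
        # What the attacker sees under each knowledge variant.
        if variant == "full":      # full knowledge and observability
            return true_state, params
        if variant == "partial":   # knows the parameters, cannot observe the state
            return None, params
        return None, None          # zero knowledge: neither parameters nor state

    # One illustrative round under the partial-knowledge variant.
    params = {"tp_rate": 0.7, "fp_rate": 0.1}
    state = {"compromised_hosts": 2}
    obs = attacker_observation(state, params, "partial")
    alarm = detector_fires(attack_happened=True, **params)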
Explainability in machine learning has become increasingly important as machine-learning-powered systems become ubiquitous and both regulation and public sentiment begin to demand an understanding of how these systems make decisions. As a result, a number of explanation methods have begun to see widespread adoption. This work summarizes, compares, and contrasts three popular explanation methods: LIME, SmoothGrad, and SHAP. We evaluate these methods with respect to: robustness, in the sense of sample complexity and stability; understandability, in the sense that provided explanations are consistent with user expectations; and usability, in the sense that the explanations allow the model to be modified based on the output. This work concludes that current explanation methods are insufficient, and that putting faith in and adopting these methods may actually be worse than simply not using them.
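Of the three methods, SmoothGrad is the simplest to sketch: it averages the gradient of a class score with respect to the input over many noise-perturbed copies of that input. Below is a minimal PyTorch sketch of that idea; `model`, `x`, and `target_class` are placeholder names assumed for illustration, not artifacts of this work.

    import torch

    def smoothgrad(model, x, target_class, n_samples=50, noise_std=0.15):
        # Average input gradients over noisy copies of the input.
        grads = torch.zeros_like(x)
        for _ in range(n_samples):
            noisy = (x.detach() + noise_std * torch.randn_like(x)).requires_grad_(True)
            score = model(noisy)[0, target_class]  # score for the class being explained
            score.backward()
            grads += noisy.grad
        return grads / n_samples  # averaged sensitivity map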
Legislation and public sentiment throughout the world have promoted fairness metrics, explainability, and interpretability as prescriptions for the responsible development of ethical artificial intelligence systems. Despite the importance of these three pillars in the foundation of the field, they can be challenging to operationalize, and attempts to solve these problems in production environments often feel Sisyphean. This difficulty stems from a number of factors: fairness metrics are computationally difficult to incorporate into training and rarely alleviate all of the harms perpetrated by these systems, while interpretability and explainability can be gamed to appear fair, may inadvertently reduce the privacy of personal information contained in training data, and can increase user confidence in predictions -- even when the explanations are wrong. In this work, we propose a framework for responsibly developing artificial intelligence systems by incorporating lessons from the field of information security and the secure development lifecycle to overcome challenges associated with protecting users in adversarial settings. In particular, we propose leveraging the concepts of threat modeling, design review, penetration testing, and incident response in the context of developing AI systems as ways to resolve shortcomings in the aforementioned methods.
In cybersecurity, attackers range from brash, unsophisticated script kiddies and cybercriminals to stealthy, patient advanced persistent threats. When modeling these attackers, we can observe that they demonstrate different risk-seeking and risk-averse behaviors. This work explores how an attacker's risk-seeking or risk-averse behavior affects their operations against detection-optimizing defenders in an Internet of Things ecosystem. Using an evaluation framework built on real, parametrizable malware, we develop a game played by a defender against attackers whose malware is parametrized to be more aggressive or more stealthy. Attacker outcomes are evaluated under an exponential-utility framework according to each attacker's willingness to accept risk. We find that against a defender who must choose a single strategy up front, risk-seeking attackers gain more actual utility than risk-averse attackers, particularly in cases where the defender is better equipped than the two attackers anticipate. Additionally, we empirically confirm that high-risk, high-reward scenarios are more beneficial to risk-seeking attackers like cybercriminals, while low-risk, low-reward scenarios are more beneficial to risk-averse attackers like advanced persistent threats.
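For reference, the textbook form of exponential utility (the paper's exact parameterization is not quoted here) and its effect on a risky versus a certain payoff can be sketched as follows; the lotteries are illustrative numbers only.

    import math

    def exponential_utility(x, a):
        # Textbook exponential utility: a > 0 risk-averse, a < 0 risk-seeking,
        # a == 0 reduces to risk-neutral utility u(x) = x.
        return x if a == 0 else (1.0 - math.exp(-a * x)) / a

    def expected_utility(lottery, a):
        # lottery: list of (probability, payoff) pairs.
        return sum(p * exponential_utility(x, a) for p, x in lottery)

    high_risk = [(0.2, 100.0), (0.8, 0.0)]  # high-risk, high-reward lottery
    low_risk = [(1.0, 20.0)]                # certain payoff with the same mean
    # A risk-seeking attacker (a < 0) prefers high_risk; a risk-averse one (a > 0) prefers low_risk.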
In many cases, neural networks perform well on test data but tend to overestimate their confidence on out-of-distribution data. This has led to the adoption of Bayesian neural networks, which better capture uncertainty and therefore more accurately reflect a model's confidence. For machine learning security researchers, this raises the natural question of how making a model Bayesian affects the security of the model. In this work, we explore the interplay between Bayesianism and two measures of security: model privacy and adversarial robustness. We demonstrate that Bayesian neural networks are, in general, more vulnerable to membership inference attacks, but are at least as robust as their non-Bayesian counterparts to adversarial examples.
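A common baseline membership inference attack, one representative of the attack family considered here rather than necessarily the exact attack used, simply thresholds the model's confidence in the true label; for a Bayesian network the confidence would first be averaged over posterior samples. The `predict_proba` callable and the threshold below are assumptions for illustration.

    import numpy as np

    def membership_scores(predict_proba, X, y):
        # Model confidence assigned to each example's true label.
        probs = predict_proba(X)                  # shape: (n_examples, n_classes)
        return probs[np.arange(len(y)), y]

    def infer_membership(predict_proba, X, y, threshold=0.9):
        # Guess "training member" when confidence on the true label is high.
        return membership_scores(predict_proba, X, y) > threshold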
The 3rd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in AI Ethics since October 2020. It aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the field's ever-changing developments. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: algorithmic injustice, discrimination, ethical AI, labor impacts, misinformation, privacy, risk and security, social media, and more. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Unique to this report is "The Abuse and Misogynoir Playbook," written by Dr. Katlyn Turner (Research Scientist, Space Enabled Research Group, MIT), Dr. Danielle Wood (Assistant Professor, Program in Media Arts and Sciences; Assistant Professor, Aeronautics and Astronautics; Lead, Space Enabled Research Group, MIT) and Dr. Catherine D'Ignazio (Assistant Professor, Urban Science and Planning; Director, Data + Feminism Lab, MIT). The piece (and accompanying infographic) is a deep dive into the historical and systematic silencing, erasure, and revision of Black women's contributions to knowledge and scholarship in the United States, and globally. Exposing and countering this Playbook has become increasingly important following the firing of AI Ethics expert Dr. Timnit Gebru (and several of her supporters) at Google. This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but should also be used as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.
Differentially private models seek to protect the privacy of the data they are trained on, making differential privacy an important component of model security and privacy. At the same time, data scientists and machine learning engineers seek to use uncertainty quantification methods to ensure models are as useful and actionable as possible. We explore the tension between uncertainty quantification via dropout and privacy by conducting membership inference attacks against models with and without differential privacy. We find that large dropout rates slightly increase a model's risk of succumbing to membership inference attacks in all cases, including for differentially private models.
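The dropout-based uncertainty quantification referred to here is typically Monte Carlo dropout: keeping dropout active at inference time and sampling repeated forward passes. Below is a minimal PyTorch sketch of that idea, assuming a `model` containing nn.Dropout layers (an assumption for illustration, not the paper's code).

    import torch

    def mc_dropout_predict(model, x, n_samples=30):
        # Put the model in eval mode, then re-enable only the dropout layers
        # so inference stays stochastic.
        model.eval()
        for m in model.modules():
            if isinstance(m, torch.nn.Dropout):
                m.train()
        with torch.no_grad():
            samples = torch.stack(
                [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
            )
        # Predictive mean and a simple per-class uncertainty estimate.
        return samples.mean(dim=0), samples.std(dim=0)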
The 2nd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in the field of AI Ethics since July 2020. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: AI and society, bias and algorithmic justice, disinformation, humans and AI, labor impacts, privacy, risk, and future of AI ethics. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. These experts include: Danit Gal (Tech Advisor, United Nations), Amba Kak (Director of Global Policy and Programs, NYU's AI Now Institute), Rumman Chowdhury (Global Lead for Responsible AI, Accenture), Brent Barron (Director of Strategic Projects and Knowledge Management, CIFAR), Adam Murray (U.S. Diplomat working on tech policy, Chair of the OECD Network on AI), Thomas Kochan (Professor, MIT Sloan School of Management), and Katya Klinova (AI and Economy Program Lead, Partnership on AI). This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but should also be used as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.
The attention that deep learning has garnered from the academic community and industry continues to grow year over year, and it has been said that we are in a new golden age of artificial intelligence research. However, neural networks are still often seen as a "black box" in which learning occurs but cannot be understood in a human-interpretable way. Since these machine learning systems are increasingly being adopted in security contexts, it is important to explore these interpretations. We approach this problem using an Android malware traffic dataset. Using the information plane, we explore how homeomorphisms affect the learned representation of the data and the invariance of the mutual information captured by the parameters on that data. We empirically validate these results, using accuracy as a second measure of the similarity of learned representations. Our results suggest that although the details of the learned representations and the specific coordinate system defined over the manifold of all parameters differ slightly, the functional approximations are the same. Furthermore, our results show that because mutual information remains invariant under homeomorphism, only feature engineering methods that alter the entropy of the dataset will change the outcome of the neural network. This means that for some datasets and tasks, meaningful, human-driven feature engineering or changes in architecture are required to provide the neural network with enough information to generate a sufficient statistic. Applying our results can guide analysis methods for machine learning engineers and suggests that neural networks that can exploit the convolution theorem are as accurate as standard convolutional neural networks and can be more computationally efficient.
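The invariance being leveraged is the standard fact that mutual information is unchanged by invertible reparameterizations of a representation; in information-plane notation, with input X and learned representation T,

    I(X; T) = I\bigl(X; \phi(T)\bigr) \quad \text{for any homeomorphism } \phi,

so only transformations that change the entropy of the data itself, such as lossy feature engineering, can change what the network is able to learn. The exact statement in the paper may carry additional conditions.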
Security and ethics are both core to ensuring that a machine learning system can be trusted. In production machine learning, there is generally a hand-off from those who build a model to those who deploy it. In this hand-off, the engineers responsible for model deployment are often not privy to the details of the model and thus the potential vulnerabilities associated with its usage, exposure, or compromise. Techniques such as model theft, model inversion, or model misuse may not be considered in model deployment, so it is incumbent upon data scientists and machine learning engineers to understand these potential risks and communicate them to the engineers deploying and hosting their models. This is an open problem in the machine learning community, and to help alleviate it, automated systems for validating the privacy and security of models need to be developed; these will help lower the burden of implementing these hand-offs and increase the ubiquity of their adoption.