Mikel Rodriguez

Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications

May 23, 2023
Micah Musser, Andrew Lohn, James X. Dempsey, Jonathan Spring, Ram Shankar Siva Kumar, Brenda Leong, Christina Liaghati, Cindy Martinez, Crystal D. Grant, Daniel Rohrer, Heather Frase, Jonathan Elliott, John Bansemer, Mikel Rodriguez, Mitt Regan, Rumman Chowdhury, Stefan Hermanek

In July 2022, the Center for Security and Emerging Technology (CSET) at Georgetown University and the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center convened a workshop of experts to examine the relationship between vulnerabilities in artificial intelligence systems and more traditional types of software vulnerabilities. Topics discussed included the extent to which AI vulnerabilities can be handled under standard cybersecurity processes, the barriers currently preventing the accurate sharing of information about AI vulnerabilities, legal issues associated with adversarial attacks on AI systems, and potential areas where government support could improve AI vulnerability management and mitigation. This report is meant to accomplish two things. First, it provides a high-level discussion of AI vulnerabilities, including the ways in which they are disanalogous to other types of vulnerabilities, and the current state of affairs regarding information sharing and legal oversight of AI vulnerabilities. Second, it attempts to articulate broad recommendations as endorsed by the majority of participants at the workshop.

Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks

Jan 08, 2021
Marissa Dotter, Sherry Xie, Keith Manville, Josh Harguess, Colin Busho, Mikel Rodriguez

Machine Learning (ML) models are known to be vulnerable to adversarial inputs, and researchers have demonstrated that even production systems, such as self-driving cars and ML-as-a-service offerings, are susceptible. These systems are attractive targets for bad actors, and disrupting them can cause real physical and economic harm. When attacks on production ML systems occur, the ability to attribute the attack to the responsible threat group is a critical step in formulating a response and holding the attackers accountable. We pose the following question: can adversarially perturbed inputs be attributed to the particular method used to generate the attack? In other words, is there a signal in these attacks that exposes the attack algorithm, model architecture, or hyperparameters used? We introduce the concept of adversarial attack attribution and create a simple supervised learning experimental framework to examine the feasibility of discovering attributable signals in adversarial attacks. We find that it is possible to differentiate attacks generated with different attack algorithms, models, and hyperparameters on both the CIFAR-10 and MNIST datasets.

* Accepted to RSEML Workshop at AAAI 2021 

Learning a Predictable and Generative Vector Representation for Objects

Aug 31, 2016
Rohit Girdhar, David F. Fouhey, Mikel Rodriguez, Abhinav Gupta

What is a good vector representation of an object? We believe that it should be generative in 3D, in the sense that it can produce new 3D objects, and predictable from 2D, in the sense that it can be predicted from 2D images. We propose a novel architecture, called the TL-embedding network, to learn an embedding space with both properties. The network consists of two components: (a) an autoencoder that ensures the representation is generative, and (b) a convolutional network that ensures the representation is predictable. This enables us to tackle a number of tasks, including voxel prediction from 2D images and 3D model retrieval. Extensive experimental analysis demonstrates the usefulness and versatility of this embedding.

* To appear in ECCV 2016. Project webpage: rohitgirdhar.github.io/GenerativePredictableVoxels/ 