Nikos Arechiga

Training Towards Critical Use: Learning to Situate AI Predictions Relative to Human Knowledge

Aug 30, 2023
Anna Kawakami, Luke Guerdan, Yanghuidi Cheng, Matthew Lee, Scott Carter, Nikos Arechiga, Kate Glazko, Haiyi Zhu, Kenneth Holstein

A growing body of research has explored how to support humans in making better use of AI-based decision support, including via training and onboarding. Existing research has focused on decision-making tasks where it is possible to evaluate "appropriate reliance" by comparing each decision against a ground truth label that cleanly maps to both the AI's predictive target and the human decision-maker's goals. However, this assumption does not hold in many real-world settings where AI tools are deployed today (e.g., social work, criminal justice, and healthcare). In this paper, we introduce a process-oriented notion of appropriate reliance called critical use that centers the human's ability to situate AI predictions against knowledge that is uniquely available to them but unavailable to the AI model. To explore how training can support critical use, we conduct a randomized online experiment in a complex social decision-making setting: child maltreatment screening. We find that, when given accelerated, low-stakes opportunities to practice AI-assisted decision-making in this setting, novices came to exhibit patterns of disagreement with AI that resemble those of experienced workers. A qualitative examination of participants' explanations for their AI-assisted decisions revealed that they drew upon qualitative case narratives, to which the AI model did not have access, to learn when (not) to rely on AI predictions. Our findings open new questions for the study and design of training for real-world AI-assisted decision-making.

Drag-guided diffusion models for vehicle image generation

Jun 16, 2023
Nikos Arechiga, Frank Permenter, Binyang Song, Chenyang Yuan

Denoising diffusion models trained at web-scale have revolutionized image generation. The application of these tools to engineering design is an intriguing possibility, but is currently limited by their inability to parse and enforce concrete engineering constraints. In this paper, we take a step towards this goal by proposing physics-based guidance, which enables optimization of a performance metric (as predicted by a surrogate model) during the generation process. As a proof-of-concept, we add drag guidance to Stable Diffusion, which allows this tool to generate images of novel vehicles while simultaneously minimizing their predicted drag coefficients.
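
The guidance mechanism can be pictured as a small modification to the reverse-diffusion loop, in the style of classifier guidance. Below is a minimal sketch assuming a pretrained noise-prediction model, a differentiable surrogate drag predictor, and a scheduler exposing predict_x0 and step; these names, and the omission of the usual noise-level scaling factors, are illustrative assumptions rather than the paper's exact implementation.

```python
import torch

# Sketch only: `denoiser`, `drag_model`, and `scheduler` are hypothetical
# stand-ins for a pretrained diffusion model, a differentiable surrogate
# drag predictor, and a sampler exposing predict_x0/step.
def drag_guided_step(x_t, t, denoiser, drag_model, scheduler, scale=0.1):
    """One reverse-diffusion step nudged toward lower predicted drag."""
    x_t = x_t.detach().requires_grad_(True)
    eps = denoiser(x_t, t)                       # standard noise prediction
    x0_hat = scheduler.predict_x0(x_t, eps, t)   # clean image implied by eps
    drag = drag_model(x0_hat).sum()              # surrogate's drag estimate
    grad, = torch.autograd.grad(drag, x_t)       # d(drag) / d(x_t)
    # Shifting the noise prediction along the drag gradient makes the
    # denoising update descend the surrogate's predicted drag coefficient.
    eps_guided = eps + scale * grad
    return scheduler.step(eps_guided, t, x_t.detach())
```

Running the full sampling loop with this step in place of the unguided one trades image fidelity against the surrogate's drag objective through the `scale` parameter.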

Surrogate Modeling of Car Drag Coefficient with Depth and Normal Renderings

May 26, 2023
Binyang Song, Chenyang Yuan, Frank Permenter, Nikos Arechiga, Faez Ahmed

Generative AI models have made significant progress in automating the creation of 3D shapes, which has the potential to transform car design. In engineering design and optimization, evaluating engineering metrics is crucial. Making generative models performance-aware, so that they can create high-performing designs, requires surrogate models of these metrics. However, the representations of three-dimensional (3D) shapes currently in use either require extensive computational resources to learn or suffer from significant information loss, which impairs their effectiveness in surrogate modeling. To address this issue, we propose a new two-dimensional (2D) representation of 3D shapes. We develop a surrogate drag model based on this representation to verify its effectiveness in predicting 3D car drag. To train our model, we construct a diverse dataset of 9,070 high-quality 3D car meshes labeled with drag coefficients computed from computational fluid dynamics (CFD) simulations. Our experiments demonstrate that our model can accurately and efficiently evaluate drag coefficients, with an $R^2$ value above 0.84 across various car categories. Moreover, the proposed representation method can be generalized to many other product categories beyond cars. Our model is implemented using deep neural networks, making it compatible with recent AI image generation tools (such as Stable Diffusion) and a significant step towards the automatic generation of drag-optimized car designs. We have made the dataset and code publicly available at https://decode.mit.edu/projects/dragprediction/.
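
As a concrete reading of the representation idea, the sketch below treats the 2D representation as stacked depth and surface-normal renderings fed to an ordinary CNN regressor. The channel layout, backbone choice, and single-view setup are illustrative assumptions; the paper's exact architecture and rendering pipeline live in the linked repository.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class DragSurrogate(nn.Module):
    """CNN regressor from stacked depth/normal renderings to a drag coefficient.

    Assumes each car mesh has been rendered offline into a depth map
    (1 channel) and a surface-normal map (3 channels) from a fixed viewpoint;
    multi-view setups would simply stack more channels. Channel counts and
    the ResNet-18 backbone are illustrative, not the paper's configuration.
    """
    def __init__(self, in_channels=4):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Replace the RGB stem so the network accepts depth+normal channels.
        backbone.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7,
                                   stride=2, padding=3, bias=False)
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # scalar C_d
        self.backbone = backbone

    def forward(self, renderings):           # (B, 4, H, W)
        return self.backbone(renderings).squeeze(-1)

# Training against the CFD-computed labels is then ordinary regression, e.g.
# loss = F.mse_loss(model(renderings), drag_coefficients).
```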

Second-Order Sensitivity Analysis for Bilevel Optimization

May 04, 2022
Robert Dyro, Edward Schmerling, Nikos Arechiga, Marco Pavone

In this work we derive a second-order approach to bilevel optimization, a type of mathematical programming in which the solution to a parameterized optimization problem (the "lower" problem) is itself to be optimized (in the "upper" problem) as a function of the parameters. Many existing approaches to bilevel optimization employ first-order sensitivity analysis, based on the implicit function theorem (IFT), for the lower problem to derive a gradient of the lower problem solution with respect to its parameters; this IFT gradient is then used in a first-order optimization method for the upper problem. This paper extends this sensitivity analysis to provide second-order derivative information of the lower problem (which we call the IFT Hessian), enabling the use of faster-converging second-order optimization methods at the upper level. Our analysis shows that (i) much of the computation already used to produce the IFT gradient can be reused for the IFT Hessian, (ii) error bounds derived for the IFT gradient readily apply to the IFT Hessian, and (iii) computing IFT Hessians can significantly reduce overall computation by extracting more information from each lower-level solve. We corroborate our findings and demonstrate the broad range of applications of our method by applying it to problem instances of least-squares hyperparameter auto-tuning, multi-class SVM auto-tuning, and inverse optimal control.

* Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:9166-9181, 2022  
* 16 pages, 6 figures 
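
For readers unfamiliar with the first-order machinery the paper builds on: the IFT gradient comes from differentiating the lower problem's stationarity condition. The notation below is generic, not the paper's.

```latex
% Lower problem and its stationarity condition:
%   z^\star(\theta) = \arg\min_z f(z, \theta),
%   \nabla_z f(z^\star(\theta), \theta) = 0.
% Differentiating the stationarity condition with respect to \theta:
\nabla^2_{zz} f \,\frac{\partial z^\star}{\partial \theta}
  + \nabla^2_{z\theta} f = 0
\quad\Longrightarrow\quad
\frac{\partial z^\star}{\partial \theta}
  = -\bigl(\nabla^2_{zz} f\bigr)^{-1}\,\nabla^2_{z\theta} f .
```

Differentiating once more yields the IFT Hessian; it involves third derivatives of $f$ but reuses the factorization of $\nabla^2_{zz} f$ already computed for the gradient, consistent with point (i) above.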

Accelerating Understanding of Scientific Experiments with End to End Symbolic Regression

Dec 07, 2021
Nikos Arechiga, Francine Chen, Yan-Ying Chen, Yanxia Zhang, Rumen Iliev, Heishiro Toyoda, Kent Lyons

We consider the problem of learning free-form symbolic expressions from raw data, such as that produced by an experiment in any scientific domain. Accurate and interpretable models of scientific phenomena are the cornerstone of scientific research. Simple yet interpretable models, such as linear or logistic regression and decision trees, often lack predictive accuracy. Alternatively, accurate black-box models such as deep neural networks provide high predictive accuracy, but do not readily admit human understanding in a way that would enrich the scientific theory of the phenomenon. Many great breakthroughs in science revolve around the development of parsimonious equational models with high predictive accuracy, such as Newton's laws, universal gravitation, and Maxwell's equations. Previous work on automating the search for equational models from data combines domain-specific heuristics with computationally expensive techniques, such as genetic programming and Monte Carlo search. We develop a deep neural network (MACSYMA) to address the symbolic regression problem as an end-to-end supervised learning problem. MACSYMA can generate symbolic expressions that describe a dataset. The computational complexity of the task is reduced to the feedforward computation of a neural network. We train our neural network on a synthetic dataset consisting of data tables of varying length and varying levels of noise, for which the neural network must learn to produce the correct symbolic expression token by token. Finally, we validate our technique by running it on a public dataset from behavioral science.
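
To make the end-to-end framing concrete, here is a toy sketch of the table-in, tokens-out setup: a network encodes a data table and greedily decodes an expression one token at a time. The architecture, vocabulary, and decoding scheme below are invented for illustration and are not MACSYMA's actual design.

```python
import torch
import torch.nn as nn

VOCAB = ["<s>", "</s>", "x", "c", "+", "*", "sin", "exp"]  # illustrative tokens

class TableToExpression(nn.Module):
    def __init__(self, d=128):
        super().__init__()
        self.encode_row = nn.Linear(2, d)             # one (x, y) pair -> vector
        self.embed = nn.Embedding(len(VOCAB), d)
        self.decoder = nn.GRU(d, d, batch_first=True)
        self.head = nn.Linear(d, len(VOCAB))

    def forward(self, table, max_len=16):
        # Permutation-invariant table summary: mean over embedded rows,
        # used as the decoder's initial hidden state.
        h = self.encode_row(table).mean(dim=1).unsqueeze(0)    # (1, B, d)
        tok = torch.zeros(table.size(0), 1, dtype=torch.long)  # "<s>" token
        out = []
        for _ in range(max_len):                               # greedy decoding
            y, h = self.decoder(self.embed(tok), h)
            tok = self.head(y).argmax(dim=-1)                  # next token ids
            out.append(tok)
        return torch.cat(out, dim=1)  # (B, max_len) token ids of the expression

# Usage: token_ids = TableToExpression()(torch.randn(8, 50, 2))
```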

Certified Control: An Architecture for Verifiable Safety of Autonomous Vehicles

Mar 29, 2021
Daniel Jackson, Valerie Richmond, Mike Wang, Jeff Chow, Uriel Guajardo, Soonho Kong, Sergio Campos, Geoffrey Litt, Nikos Arechiga

Widespread adoption of autonomous cars will require greater confidence in their safety than is currently possible. Certified control is a new safety architecture whose goal is two-fold: to achieve a very high level of safety, and to provide a framework for justifiable confidence in that safety. The key idea is a runtime monitor that acts, along with sensor hardware and low-level control and actuators, as a small trusted base, ensuring the safety of the system as a whole. Unfortunately, in current systems, complex perception makes even the verification of a runtime monitor challenging. Unlike traditional runtime monitoring, therefore, a certified control monitor does not perform perception and analysis itself. Instead, the main controller assembles evidence that the proposed action is safe into a certificate that is then checked independently by the monitor. This exploits the classic gap between the costs of finding and checking. The controller is assigned the task of finding the certificate, and can thus use the most sophisticated algorithms available (including learning-enabled software); the monitor is assigned only the task of checking, and can thus run quickly, be smaller, and be formally verifiable. This paper explains the key ideas of certified control and illustrates them with a certificate for LiDAR data and its formal verification. It shows how the architecture dramatically reduces the amount of code to be verified, providing an end-to-end safety analysis that would likely not be achievable in a traditional architecture.

* 18 pages + 15 page Appendix, 11 figures 
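
The finding/checking split is easy to see in miniature. The sketch below invents a deliberately simple certificate (a claimed obstacle-free distance, checked against LiDAR ranges the monitor receives directly from the sensor); the paper's actual LiDAR certificate and its verification are considerably richer.

```python
from dataclasses import dataclass
from typing import List

# Minimal sketch of the certified-control split, not the paper's certificate
# format. The controller (untrusted, arbitrarily complex) proposes an action
# together with a claim; the monitor (small, trusted) only checks the claim.

@dataclass
class Certificate:
    action: str              # e.g. "continue at current speed"
    free_distance_m: float   # claimed obstacle-free distance along the path

def monitor_check(cert: Certificate,
                  corridor_ranges_m: List[float],
                  stopping_distance_m: float) -> bool:
    """Checking is cheap: no perception, just two simple comparisons.

    `corridor_ranges_m` are LiDAR returns along the planned path, delivered
    to the monitor via the trusted sensor path rather than the controller.
    """
    # 1. No LiDAR return inside the claimed free corridor contradicts the claim.
    claim_holds = all(r >= cert.free_distance_m for r in corridor_ranges_m)
    # 2. The claimed free distance must cover the worst-case stopping distance.
    claim_suffices = cert.free_distance_m >= stopping_distance_m
    return claim_holds and claim_suffices

# Execute cert.action only when monitor_check(...) returns True; otherwise
# fall back to a safe maneuver such as braking.
```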

Heteroskedastic and Imbalanced Deep Learning with Adaptive Regularization

Jun 29, 2020
Kaidi Cao, Yining Chen, Junwei Lu, Nikos Arechiga, Adrien Gaidon, Tengyu Ma

Real-world large-scale datasets are heteroskedastic and imbalanced -- labels have varying levels of uncertainty and label distributions are long-tailed. Heteroskedasticity and imbalance challenge deep learning algorithms due to the difficulty of distinguishing among mislabeled, ambiguous, and rare examples. Addressing heteroskedasticity and imbalance simultaneously is under-explored. We propose a data-dependent regularization technique for heteroskedastic datasets that regularizes different regions of the input space differently. Inspired by the theoretical derivation of the optimal regularization strength in a one-dimensional nonparametric classification setting, our approach adaptively regularizes the data points in higher-uncertainty, lower-density regions more heavily. We test our method on several benchmark tasks, including a real-world heteroskedastic and imbalanced dataset, WebVision. Our experiments corroborate our theory and demonstrate a significant improvement over other methods in noise-robust deep learning.
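
One way to picture the approach: attach a per-example regularization strength to an otherwise standard loss, heavier where estimated uncertainty is high and local density is low. The penalty chosen below (an input-gradient norm) and the exact weighting formula are illustrative assumptions, not the paper's derivation.

```python
import torch
import torch.nn.functional as F

def adaptive_reg_loss(model, x, y, uncertainty, density, tau=1.0):
    """Per-example regularization: heavier in high-uncertainty, low-density regions.

    `uncertainty` and `density` are assumed to be precomputed per-example
    estimates (e.g. from an auxiliary model and a kNN density estimate); the
    input-gradient penalty is one simple stand-in for a local smoothness
    regularizer.
    """
    x = x.clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), y, reduction="none")   # (B,)

    # High uncertainty / low density  =>  large lambda  =>  heavier penalty.
    lam = tau * uncertainty / density.clamp_min(1e-6)     # (B,)

    grad = torch.autograd.grad(ce.sum(), x, create_graph=True)[0]
    penalty = grad.flatten(1).pow(2).sum(dim=1)           # (B,)

    return (ce + lam * penalty).mean()
```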

Better AI through Logical Scaffolding

Sep 12, 2019
Nikos Arechiga, Jonathan DeCastro, Soonho Kong, Karen Leung

We describe the concept of logical scaffolds, which can be used to improve the quality of software that relies on AI components. We explain how some of the existing ideas on runtime monitors for perception systems can be seen as a specific instance of logical scaffolds. Furthermore, we describe how logical scaffolds may be useful for improving AI programs beyond perception systems, to include general prediction systems and agent behavior models.

* CAV Workshop on Formal Methods for ML-enabled Autonomous Systems 2019 
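
A toy example of the monitor-as-scaffold idea: a logical property that any plausible perception output should satisfy, checked at runtime. The specific invariant below (tracked objects cannot move faster than a physical speed limit between frames) is invented for illustration, not taken from the paper.

```python
# Logical scaffold as a runtime monitor: flag perception outputs that
# violate a simple physical invariant on tracked-object motion.
V_MAX_M_PER_S = 60.0   # assumed speed limit for any plausible object

def consistent_motion(prev_tracks, curr_tracks, dt):
    """Return ids of tracks whose implied speed is physically implausible."""
    violations = []
    for obj_id, (x1, y1) in curr_tracks.items():
        if obj_id not in prev_tracks:
            continue                     # new track: nothing to compare yet
        x0, y0 = prev_tracks[obj_id]
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        if speed > V_MAX_M_PER_S:
            violations.append(obj_id)    # implausible jump: distrust this frame
    return violations
```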

Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss

Jun 18, 2019
Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, Tengyu Ma

Deep learning algorithms can fare poorly when the training dataset suffers from heavy class imbalance but the testing criterion requires good generalization on less frequent classes. We design two novel methods to improve performance in such scenarios. First, we propose a theoretically principled label-distribution-aware margin (LDAM) loss motivated by minimizing a margin-based generalization bound. This loss replaces the standard cross-entropy objective during training and can be applied with prior strategies for training with class imbalance, such as re-weighting or re-sampling. Second, we propose a simple yet effective training schedule that defers re-weighting until after the initial stage, allowing the model to learn an initial representation while avoiding some of the complications associated with re-weighting or re-sampling. We test our methods on several benchmark vision tasks, including the real-world imbalanced dataset iNaturalist 2018. Our experiments show that either of these methods alone can already improve over existing techniques, and their combination achieves even better performance gains.
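
The LDAM loss itself is compact enough to sketch: each class $j$ receives a margin proportional to $n_j^{-1/4}$, applied by shrinking the true-class logit before cross-entropy. The margin cap and logit scale below are common choices from published implementations, not values fixed by the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LDAMLoss(nn.Module):
    """Cross-entropy with per-class margins: rarer classes get larger margins."""
    def __init__(self, class_counts, max_m=0.5, s=30.0):
        super().__init__()
        # Margin schedule from the paper: delta_j proportional to n_j^(-1/4),
        # rescaled so the largest margin equals max_m (a typical cap).
        m = torch.tensor(class_counts, dtype=torch.float).pow(-0.25)
        self.register_buffer("margins", m * (max_m / m.max()))
        self.s = s   # logit scale, a common choice in released implementations

    def forward(self, logits, target):
        # Subtract each example's class margin from its true-class logit only,
        # then apply ordinary (optionally re-weighted) cross-entropy.
        delta = self.margins[target].unsqueeze(1)                # (B, 1)
        adjusted = logits.scatter_add(1, target.unsqueeze(1), -delta)
        return F.cross_entropy(self.s * adjusted, target)
```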
