Abstract: Understanding and adhering to soft constraints is essential for safe and socially compliant autonomous driving. However, such constraints are often implicit, context-dependent, and difficult to specify explicitly. In this work, we present DRIVE, a novel framework for Dynamic Rule Inference and Verified Evaluation that models and evaluates human-like driving constraints from expert demonstrations. DRIVE leverages exponential-family likelihood modeling to estimate the feasibility of state transitions, constructing a probabilistic representation of soft behavioral rules that vary across driving contexts. These learned rule distributions are then embedded into a convex optimization-based planning module, enabling the generation of trajectories that are not only dynamically feasible but also compliant with inferred human preferences. Unlike prior approaches that rely on fixed constraint forms or purely reward-based modeling, DRIVE offers a unified framework that tightly couples rule inference with trajectory-level decision-making, supporting both data-driven constraint generalization and principled feasibility verification. We validate DRIVE on large-scale naturalistic driving datasets, including inD, highD, and rounD, and benchmark it against representative inverse constraint learning and planning baselines. Experimental results show that DRIVE achieves a 0.0% soft-constraint violation rate, smoother trajectories, and stronger generalization across diverse driving scenarios. Verified evaluations further demonstrate the efficiency, explainability, and robustness of the framework for real-world deployment.
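To make the two-stage idea in the abstract concrete, the following minimal sketch (not the DRIVE implementation) fits a Gaussian, i.e. exponential-family, likelihood over expert transition features and embeds it as a soft quadratic penalty in a convex trajectory-planning problem. The 1-D state, the feature choice (per-step velocity and acceleration), and all weights are assumptions made purely for illustration.

```python
# Illustrative sketch only: Gaussian (exponential-family) feasibility model over
# expert transition features, used as a soft penalty inside a convex planner.
# The 1-D state, features, and weights are assumptions, not DRIVE's actual design.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
dt = 0.1

# 1) "Expert demonstrations": 1-D positions; transition features are per-step
#    velocity and acceleration.
expert_pos = np.cumsum(rng.normal(1.0, 0.05, size=200)) * dt
vel = np.diff(expert_pos) / dt
acc = np.diff(vel) / dt
phi = np.stack([vel[1:], acc], axis=1)                 # features (v_t, a_t)
mu = phi.mean(axis=0)                                  # expert feature mean
Sigma_inv = np.linalg.inv(np.cov(phi.T) + 1e-6 * np.eye(2))
L = np.linalg.cholesky(Sigma_inv)                      # Sigma_inv = L @ L.T

# 2) Convex planner: penalize planned transitions the learned model deems unlikely.
T = 30
x = cp.Variable(T)                                     # planned positions
v = (x[1:] - x[:-1]) / dt                              # affine in x
a = (v[1:] - v[:-1]) / dt                              # affine in x
feat = cp.vstack([v[1:], a]).T                         # planned features, shape (T-2, 2)
dev = feat - np.tile(mu, (T - 2, 1))
rule_penalty = cp.sum_squares(dev @ L)                 # sum of Mahalanobis distances
objective = cp.Minimize(0.1 * cp.sum_squares(a) + rule_penalty)
constraints = [x[0] == 0.0, x[T - 1] == 3.0]           # start and goal positions
cp.Problem(objective, constraints).solve()
print("planned positions:", np.round(x.value, 2))
```

Because the features are affine in the decision variables and the penalty is a convex quadratic, the soft rules can be folded into the planner without sacrificing convexity; this is the coupling between rule inference and trajectory-level optimization that the abstract refers to.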