Abstract: Ensuring the safe operation of safety-critical complex systems interacting with their environment poses significant challenges, particularly when the system's world model relies on machine learning algorithms to process the perception input. A comprehensive safety argumentation requires knowledge of how faults or functional insufficiencies propagate through the system and interact with external factors, so that their safety impact can be managed. While statistical analysis approaches can support the safety assessment, associative reasoning alone is sufficient neither for the safety argumentation nor for the identification and investigation of safety measures. A causal understanding of the system and its interaction with the environment is crucial for safeguarding safety-critical complex systems. It allows knowledge, such as insights gained from testing, to be transferred and generalized, and it facilitates the identification of potential improvements. This work explores the use of causal Bayesian networks to model the system's causal relationships for safety analysis, and proposes measures to assess causal influences based on Pearl's framework of causal inference. We compare the approach of causal Bayesian networks to the well-established fault tree analysis, outlining advantages and limitations. In particular, we examine importance metrics typically employed in fault tree analysis as a foundation for discussing suitable causal metrics. An evaluation is performed on the example of a perception system for automated driving. Overall, this work presents an approach for causal reasoning in safety analysis that enables the integration of data-driven and expert-based knowledge to account for uncertainties arising from complex systems operating in open environments.
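To make the distinction between associative and interventional reasoning concrete, the following is a minimal, self-contained Python sketch of a hand-crafted causal Bayesian network queried with Pearl's do-operator. The variables (rain, sensor noise, misdetection) and all probabilities are illustrative assumptions and are not taken from the paper or the evaluated perception system.

```python
# Minimal sketch of a causal Bayesian network query in the spirit of Pearl's
# do-calculus. Variables and probabilities are made up for illustration.
#
# Binary variables: R = rain, N = sensor noise, M = misdetection.
# Causal structure: R -> N, R -> M, N -> M (rain confounds noise and misdetection).

p_r = {0: 0.7, 1: 0.3}                          # P(R)
p_n_given_r = {0: 0.1, 1: 0.6}                  # P(N=1 | R)
p_m_given_rn = {(0, 0): 0.02, (0, 1): 0.15,     # P(M=1 | R, N)
                (1, 0): 0.05, (1, 1): 0.40}

def p_m_given_n(n: int) -> float:
    """Observational (associative) quantity P(M=1 | N=n)."""
    num = sum(p_m_given_rn[(r, n)]
              * (p_n_given_r[r] if n else 1 - p_n_given_r[r]) * p_r[r]
              for r in (0, 1))
    den = sum((p_n_given_r[r] if n else 1 - p_n_given_r[r]) * p_r[r]
              for r in (0, 1))
    return num / den

def p_m_do_n(n: int) -> float:
    """Interventional quantity P(M=1 | do(N=n)) via truncated factorization:
    the edge R -> N is cut, so we average over the unchanged prior P(R)."""
    return sum(p_m_given_rn[(r, n)] * p_r[r] for r in (0, 1))

print(f"P(M=1 | N=1)     = {p_m_given_n(1):.3f}")  # association, inflated by confounding
print(f"P(M=1 | do(N=1)) = {p_m_do_n(1):.3f}")     # causal effect of forcing noise on
print(f"P(M=1 | do(N=0)) = {p_m_do_n(0):.3f}")
```

The interventional contrast P(M=1 | do(N=1)) - P(M=1 | do(N=0)) is one example of a causal influence measure that could play a role analogous to importance metrics in fault tree analysis, whereas the purely associative quantity overstates the influence because of the common cause.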
Abstract: Automated driving systems are safety-critical cyber-physical systems whose safety of the intended functionality (SOTIF) cannot be assumed without proper argumentation based on appropriate evidence. Recent advances in standards and regulations on the safety of driving automation are therefore intensely concerned with demonstrating that the intended functionality of these systems does not introduce unreasonable risks to stakeholders. In this work, we critically analyze the ISO 21448 standard, which contains requirements and guidance on how the SOTIF can be provably validated. Emphasis lies on developing a consistent terminology as a basis for the subsequent definition of a validation strategy that uses quantitative acceptance criteria. In the broader picture, we aim to achieve a well-defined risk decomposition that enables rigorous, quantitative validation approaches for the SOTIF of automated driving systems.
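As a purely illustrative sketch of what a quantitative risk decomposition with an acceptance criterion could look like (assumed notation, not necessarily the decomposition developed in this work), one can condition the rate of harm on a partition of the operational domain into scenario classes S_i:

```latex
% Illustrative only: symbols S_i, \lambda(S_i), \lambda_{\text{acc}} are assumptions.
\begin{align*}
  \lambda_{\text{harm}}
    &= \sum_{i} \lambda(S_i)\,
       P\bigl(\text{hazardous behaviour} \mid S_i\bigr)\,
       P\bigl(\text{harm} \mid \text{hazardous behaviour},\, S_i\bigr), \\
  \lambda_{\text{harm}} &\le \lambda_{\text{acc}}
  \qquad \text{(quantitative acceptance criterion)}.
\end{align*}
```

Here \lambda(S_i) denotes the exposure rate of scenario class S_i, and the two conditional probabilities separate the functional insufficiency from the resulting harm; a well-defined decomposition of this kind is what allows the acceptance criterion to be validated term by term.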
Abstract: The verification and validation of automated driving systems at SAE levels 4 and 5 is a multi-faceted challenge for which classical statistical considerations become infeasible. To address this, contemporary approaches suggest a decomposition into scenario classes combined with a statistical analysis thereof regarding the emergence of criticality. Unfortunately, these associational approaches may yield spurious inferences or, worse, fail to recognize the causalities leading to critical scenarios, which are, in turn, a prerequisite for the development and safeguarding of automated driving systems. To incorporate causal knowledge into these processes, this work introduces a formalization of causal queries whose answers facilitate a causal understanding of safety-relevant influencing factors for automated driving. This formalized causal knowledge can be used to specify and implement abstract safety principles that provably reduce the criticality associated with these influencing factors. Based on Judea Pearl's causal theory, we define a causal relation as a causal structure together with a context, both related to a domain ontology, with the focus on modeling the effect of such influencing factors on criticality as measured by a suitable metric. To assess modeling quality, we suggest various quantities and evaluate them on a small example. As the availability and quality of data are imperative for validly estimating answers to the causal queries, we also discuss requirements on real-world and synthetic data acquisition. We thereby contribute to establishing causal considerations at the heart of the safety processes that are urgently needed to ensure the safe operation of automated driving systems.
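To illustrate the kind of causal query meant here, the following Python sketch estimates the average causal effect of an influencing factor on a scalar criticality metric by backdoor adjustment over a confounding context variable. The synthetic data-generating process, the factor "occlusion", the context "urban", and the metric are invented assumptions and do not correspond to the paper's ontology or criticality metrics.

```python
# Sketch of answering the causal query "what is the average effect of an
# influencing factor on a criticality metric?" via backdoor adjustment.
# All data are synthetic placeholders, not the paper's ontology or metrics.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Z: scenario context (1 = urban), confounds occlusion and criticality.
z = rng.binomial(1, 0.4, n)
# X: influencing factor "occlusion present", more likely in urban scenes.
x = rng.binomial(1, 0.2 + 0.5 * z)
# Y: scalar criticality metric; occlusion raises it by 0.3 on average,
# the urban context by 0.6 (the assumed ground truth of this toy model).
y = 0.3 * x + 0.6 * z + rng.normal(0.0, 0.2, n)

# Naive associational contrast -- biased by the common cause Z.
naive = y[x == 1].mean() - y[x == 0].mean()

# Backdoor adjustment: average the stratified contrasts over P(Z).
ace = sum(
    (y[(x == 1) & (z == v)].mean() - y[(x == 0) & (z == v)].mean()) * (z == v).mean()
    for v in (0, 1)
)

print(f"naive association : {naive:.3f}")  # ~0.3 plus confounding bias
print(f"adjusted ACE      : {ace:.3f}")    # ~0.3, the causal effect of occlusion
```

An adjusted contrast of this kind, once its variables are related to the domain ontology, is the sort of evidence that can support the claim that a safety principle targeting the influencing factor actually reduces criticality rather than merely correlating with it.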