Abstract: By representing any constraint-based causal learning algorithm via a placeholder property, we decompose the correctness condition into a part relating the distribution and the true causal graph, and a part that depends solely on the distribution. This provides a general framework for obtaining correctness conditions for causal learning, with the following implications. We provide exact correctness conditions for the PC algorithm, which we then relate to the correctness conditions of several other existing causal discovery algorithms. We show that the sparsest Markov representation condition is the weakest correctness condition resulting from existing notions of minimality for maximal ancestral graphs and directed acyclic graphs. We also argue that, for causal learning beyond faithfulness, knowledge beyond Pearl-minimality is necessary.
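Since this abstract centers on correctness conditions for the PC algorithm under (and beyond) faithfulness, a minimal sketch of PC's two phases may help fix ideas: the skeleton phase removes edges using conditional-independence tests, and the collider phase orients unshielded triples using the recorded separating sets. The oracle below hard-codes the independencies of a hypothetical collider X -> Z <- Y, standing in for a faithful distribution; it is an illustration, not the paper's placeholder-property formulation.

```python
from itertools import combinations

# Toy faithful oracle for the collider DAG X -> Z <- Y: under faithfulness,
# the ONLY conditional independence is X _||_ Y given the empty set.
# (Hypothetical example; the paper characterizes which weaker properties
# of such an oracle still yield a correct output.)
def indep(a, b, cond):
    """Return True iff a and b are conditionally independent given cond."""
    return {a, b} == {"X", "Y"} and len(cond) == 0

nodes = ["X", "Y", "Z"]
# Skeleton phase: start from the complete undirected graph.
adj = {v: set(nodes) - {v} for v in nodes}
sepset = {}

depth = 0
while any(len(adj[v]) - 1 >= depth for v in nodes):
    for a in nodes:
        for b in list(adj[a]):
            # Test a _||_ b | S for all S of size `depth` from adj(a) \ {b}.
            for S in combinations(sorted(adj[a] - {b}), depth):
                if indep(a, b, set(S)):
                    adj[a].discard(b)
                    adj[b].discard(a)
                    sepset[frozenset((a, b))] = set(S)
                    break
    depth += 1

# Collider phase: for each unshielded triple a - c - b, orient it as
# a -> c <- b exactly when c is absent from sepset(a, b).
colliders = []
for c in nodes:
    for a, b in combinations(sorted(adj[c]), 2):
        if b not in adj[a] and c not in sepset.get(frozenset((a, b)), set()):
            colliders.append((a, c, b))

print("skeleton:", {v: sorted(adj[v]) for v in nodes})
print("colliders:", colliders)  # expected: [('X', 'Z', 'Y')]
```

Under faithfulness the oracle answers exactly the d-separations of the true graph, so the sketch recovers the collider; the abstract's program is to characterize precisely which weaker properties of the `indep` oracle still make such an algorithm correct.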
Abstract: Causal intervention is an essential tool in causal inference. It is axiomatized under the rules of do-calculus in the case of structural causal models. We provide simple axiomatizations for families of probability distributions to be different types of interventional distributions. Our axiomatizations neatly lead to a simple and clear theory of causality that has several advantages: it does not need modeling assumptions such as those imposed by structural causal models; it relies only on interventions on single variables; it covers most cases with latent variables and causal cycles; and, more importantly, it does not assume the existence of an underlying true causal graph; in fact, a causal graph is a by-product of our theory. We show that, under our axiomatizations, the intervened distributions are Markovian to the defined intervened causal graphs, and an observed joint probability distribution is Markovian to the obtained causal graph; these results are consistent with the case of structural causal models, and as a result, the existing theory of causal inference applies. We also show that a large class of natural structural causal models satisfies the theory presented here.
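For reference, the do-calculus mentioned here consists of Pearl's three rules, stated in standard notation below; this abstract's contribution is to axiomatize interventional distributions directly, without presupposing the structural causal model from which these rules are usually derived.

```latex
% Standard statement of Pearl's three rules of do-calculus.
% G_{\overline{X}}: delete edges into X;  G_{\underline{Z}}: delete edges out of Z.
\begin{align*}
&\textbf{Rule 1: } & P(y \mid \mathrm{do}(x), z, w) &= P(y \mid \mathrm{do}(x), w)
  && \text{if } (Y \perp Z \mid X, W)_{G_{\overline{X}}} \\
&\textbf{Rule 2: } & P(y \mid \mathrm{do}(x), \mathrm{do}(z), w) &= P(y \mid \mathrm{do}(x), z, w)
  && \text{if } (Y \perp Z \mid X, W)_{G_{\overline{X}\,\underline{Z}}} \\
&\textbf{Rule 3: } & P(y \mid \mathrm{do}(x), \mathrm{do}(z), w) &= P(y \mid \mathrm{do}(x), w)
  && \text{if } (Y \perp Z \mid X, W)_{G_{\overline{X}\,\overline{Z(W)}}}
\end{align*}
% Z(W): the nodes of Z that are not ancestors of any node of W in G_{\overline{X}}.
```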
Abstract: The paper formalizes constraint-based structure learning of the "true" causal graph from observed data in the presence of unobserved variables. We define a "generic" structure learning algorithm that provides conditions which, under the faithfulness assumption, the output of all known exact algorithms in the literature must satisfy, and that outputs graphs Markov equivalent to the causal graph. More importantly, we provide clear assumptions, weaker than faithfulness, under which the same generic algorithm outputs graphs Markov equivalent to the causal graph. We develop the theory for the general class of models under the assumption that the distribution is Markovian to the true causal graph, and we specialize the definitions and results for structural causal models.
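For orientation, the Markov condition invoked throughout can be stated in the standard DAG form below; for the latent-variable graphs this abstract concerns (maximal ancestral graphs), d-separation is replaced by m-separation. Faithfulness is the converse implication, and the abstract's assumptions sit strictly between the two.

```latex
% Markov condition for a DAG G over V = {V_1, ..., V_n} with parent sets pa_G:
P(v_1, \dots, v_n) \;=\; \prod_{i=1}^{n} P\bigl(v_i \mid \mathrm{pa}_G(v_i)\bigr),
% equivalently: every d-separation in G implies conditional independence in P,
(X \perp_d Y \mid Z)_G \;\Longrightarrow\; X \mathrel{\perp\!\!\!\perp}_P Y \mid Z.
% Faithfulness is the converse: every conditional independence in P is
% entailed by (d- or m-)separation in G. The assumptions above are weaker
% than faithfulness while keeping P Markovian to the true causal graph.
```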