Causality and Semantic Separation

Anna Zhang, London Bielicke, Eunice Jun, Adam Chlipala. Causality and Semantic Separation. Proceedings of the 46th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI'26). June 2026. Conditionally accepted.

Coming soon!

The design of scientific experiments deserves its own variant of formal verification, to catch cases where scientists make important mistakes, such as forgetting to take confounding variables into account. One of the most fundamental underpinnings of science is causality: what it means for interventions in the world to cause other outcomes, as formalized by computer scientists like Judea Pearl. However, these ideas had not previously been made rigorous to the standards of the programming-languages community, where one expects a (syntactic) program analysis to be proved sound with respect to a natural semantics.

In the domain of causality, the relevant "program analysis" we focused on is d-separation, a classic (and arguably unintuitive) condition on graphs that can be used to decide when the design of an experiment controls for sufficiently many confounding variables. Our central result (mechanized in Rocq) is that d-separation exactly coincides with a novel (and arguably intuitive) semantic definition inspired by noninterference from the theory of security. For each automated test of the quality of an experiment design, our theorem justifies an associated method for falsifying the world-modeling hypothesis behind the experiment.
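To make the graph criterion concrete, here is a minimal sketch of a d-separation check on small DAGs. This is purely illustrative (it is not the paper's Rocq development, and the function name and representation are our own): it enumerates undirected paths and applies the standard blocking rules — a chain or fork node blocks a path when conditioned on, while a collider blocks it unless the collider or one of its descendants is conditioned on.

```python
def d_separated(edges, x, y, z):
    """Decide whether x and y are d-separated given the set z.

    edges: set of (parent, child) pairs forming a DAG.
    Illustrative sketch only; exponential path enumeration, fine for tiny graphs.
    """
    nodes = {n for e in edges for n in e}
    children = {n: [c for (p, c) in edges if p == n] for n in nodes}
    z = set(z)

    def desc(n):  # all descendants of n
        out, stack = set(), [n]
        while stack:
            for c in children[stack.pop()]:
                if c not in out:
                    out.add(c)
                    stack.append(c)
        return out

    def paths(node, visited, path_edges):  # simple undirected paths x..y
        if node == y:
            yield path_edges
            return
        for (p, c) in edges:
            if p == node and c not in visited:
                yield from paths(c, visited | {c}, path_edges + [(p, c)])
            if c == node and p not in visited:
                yield from paths(p, visited | {p}, path_edges + [(p, c)])

    def blocked(path_edges):
        seq = [x]  # recover the node sequence along the path
        for (p, c) in path_edges:
            seq.append(c if seq[-1] == p else p)
        for i in range(1, len(seq) - 1):
            v = seq[i]
            e1, e2 = path_edges[i - 1], path_edges[i]
            collider = (e1[1] == v) and (e2[1] == v)  # pattern -> v <-
            if collider:
                if v not in z and not (desc(v) & z):
                    return True  # inactive collider blocks the path
            elif v in z:
                return True  # conditioned chain/fork blocks the path
        return False

    return all(blocked(pe) for pe in paths(x, {x}, []))


fork = {("U", "X"), ("U", "Y")}      # X <- U -> Y: U is a confounder
collider = {("X", "C"), ("Y", "C")}  # X -> C <- Y: C is a collider

print(d_separated(fork, "X", "Y", set()))      # False: open backdoor path through U
print(d_separated(fork, "X", "Y", {"U"}))      # True: controlling for U blocks it
print(d_separated(collider, "X", "Y", {"C"}))  # False: conditioning on a collider opens it
```

Note the asymmetry the example exercises: controlling for a confounder helps an experiment design, while controlling for a collider can invalidate it — which is part of why the criterion is arguably unintuitive.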