Identifying Coarse-grained Independent Causal Mechanisms with Self-supervision

Research output: Contribution to journal › Conference article › peer-review


Among the most effective methods for uncovering the generating mechanisms of high-dimensional unstructured data are techniques based on disentangling and learning independent causal mechanisms. However, to identify the disentangled model, previous methods either require additional observable variables or provide no identifiability results. In contrast, this work aims to design an identifiable generative model that approximates the underlying mechanisms from observational data using only self-supervision. Specifically, the generative model uses a degenerate mixture prior to learn mechanisms that generate or transform data. We outline sufficient conditions under which the generative model is identifiable up to three types of transformations that preserve a coarse-grained disentanglement. Moreover, we propose a self-supervised training method based on these identifiability conditions. We validate our approach on the MNIST, FashionMNIST, and Sprites datasets, showing, via visualization and the accuracy of a downstream predictive model under environment shifts, that the proposed method identifies disentangled models.
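The abstract's central ingredient is a degenerate mixture prior, in which each mixture component collapses toward a single point so that the component index acts as a discrete selector over mechanisms. The following toy sketch is purely illustrative and is not the paper's actual model; the names, dimensions, and near-zero variance `sigma` are assumptions chosen to show why degeneracy makes the mechanism recoverable from the latent code alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K mechanisms, each tied to one mixture component
# whose variance is nearly zero (a "degenerate" mixture prior).
K, D = 3, 2                       # number of mechanisms, latent dimension
means = rng.normal(size=(K, D))   # one anchor point per mechanism (assumed)
sigma = 1e-3                      # near-degenerate component spread (assumed)

def sample_prior(n):
    """Draw n latents: pick a component uniformly at random, then add
    tiny Gaussian noise around that component's mean."""
    ks = rng.integers(0, K, size=n)
    z = means[ks] + sigma * rng.normal(size=(n, D))
    return ks, z

ks, z = sample_prior(1000)

# Because the components are (near-)degenerate, the generating mechanism
# is recoverable from the latent by nearest-mean assignment.
recovered = np.argmin(((z[:, None, :] - means[None]) ** 2).sum(-1), axis=1)
print((recovered == ks).mean())
```

With `sigma` this small the nearest-mean assignment recovers the sampled component essentially every time, which is the intuition behind using a degenerate prior to pin down discrete mechanisms; the paper's full construction and its identifiability conditions are, of course, considerably more involved.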

Original language: English (US)
Pages (from-to): 877-903
Number of pages: 27
Journal: Proceedings of Machine Learning Research
State: Published - 2022
Event: 1st Conference on Causal Learning and Reasoning, CLeaR 2022 - Eureka, United States
Duration: Apr 11 2022 - Apr 13 2022


Keywords

  • Causal mechanisms
  • disentanglement
  • generative model
  • identifiability

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability


