For AI and ML systems aimed at assisting decision-making in real-world scenarios, it is crucial to perform complex reasoning under uncertainty reliably and efficiently. However, contemporary machine learning is often criticized for being sensitive to data perturbations, lacking guarantees on its predictions, and having little or no causal and symbolic reasoning capability. Further underscoring the relevance of these challenges, various regulatory bodies have released statements and frameworks aimed at building trustworthy AI.

The field of tractable probabilistic models (TPMs) offers a very appealing approach to many of these challenges, as TPMs enable reliable (exact, or coming with approximation guarantees) and efficient reasoning for a wide range of tasks, by design. The spectrum of TPMs is dynamically evolving and comprises a wide variety of techniques, including models with tractable likelihoods (e.g., normalizing flows and autoregressive models), models with tractable marginals (e.g., bounded-treewidth models and determinantal point processes), and models supporting more complex tractable reasoning tasks (e.g., circuits).

This year’s workshop on Tractable Probabilistic Modeling aims to highlight the connection between trustworthy artificial intelligence (in a broad sense) and tractable reasoning. It will showcase recent advances in the field of TPMs and their potential impact in trustworthy-AI settings (e.g., fairness, robustness, causality, neuro-symbolic AI).

Confirmed Speakers

Workshop Program

The workshop will take place in person at Universitat Pompeu Fabra, Barcelona, Spain, on July 19th, 2024.

Details TBA.

Call for Papers

We invite three types of submissions:

  1. Novel research on tractable probabilistic modeling
  2. Retrospective papers discussing the impact, consequences, and lessons learned from prior work on TPMs
  3. Recently accepted papers on tractable probabilistic modeling (in the original format and length)

Topics of interest

Topics of interest include, but are not limited to:

  • New tractable representations in logical, continuous, and hybrid domains
  • Learning algorithms for TPMs
  • Theoretical and empirical analysis of TPMs
  • Connections between TPM classes
  • TPMs for responsible, robust, and explainable AI
  • TPMs for trustworthy ML
  • Approximate inference algorithms with guarantees
  • Successful applications of TPMs to real-world problems

Submission Instructions

Original papers and retrospective papers must follow the UAI style guidelines and should use the adjusted TPM template. Submitted papers should be up to 4 pages long, excluding references. Already accepted papers can be submitted in the format of the venue where they were accepted. Supplementary material can be included in the same PDF (after the references); it is entirely up to the reviewers to decide whether to consult this additional material.

All submissions must be electronic (through the link below) and must closely follow the formatting guidelines in the templates; otherwise, they will automatically be rejected. Reviewing for TPM is single-blind; i.e., reviewers will know the authors’ identities, but authors will not know the reviewers’. Nevertheless, we recommend that you refer to your own prior work in the third person wherever possible. We also encourage links to public repositories such as GitHub to share code and/or data.

For any questions, please contact us at: tpmworkshop2024@gmail.com

Submissions should be made via OpenReview.

Note: New OpenReview profiles created without an institutional email will go through a moderation process that can take up to two weeks. OpenReview profiles created with an institutional email will be activated automatically.

Important Dates

All deadlines are end of day, Anywhere on Earth (AoE).

  • Submission deadline: May 27th, 2024 (deadline extended)
  • Notification of acceptance: June 18th, 2024
  • Camera-ready deadline: TBA

Previous Workshops

Organizers