Mini site for the "Advanced probabilistic modeling: from generative to neuro-symbolic AI" course for PhD students at the University of Trento, a.y. 2023/24.
Lectures run Mon-Fri, from 9.30am to 11.30am, in room Garda (Povo 1, second floor).
- lecture #1 [04/03/24]
- slides: Intro to (advanced) probabilistic reasoning (Part I)
- recap of probability theory and basics
- lecture #2 [05/03/24]
- slides: Intro to (advanced) probabilistic reasoning (Part II)
- more on normalizing flows
- code snippets to implement DGMs in Jakub's blog
- lecture #3 [06/03/24]
- slides: Circuits: representation and inference (Part I)
- companion paper on PCs
- lecture #4 [07/03/24]
- slides: Circuits: representation and inference (Part II)
- further reading: composing tractable inference routines
- lecture #5 [08/03/24]
- slides: Learning Circuits
- further reading: tensorizing circuits and a survey on learning
- lecture #6 [11/03/24]
- slides: Intro to Probabilistic Neuro-Symbolic AI
- further reading: prob & fuzzy NeSy
- lecture #7 [12/03/24]
- slides: on enforcing constraints with provable guarantees
- further reading: deepproblog
- lecture #8 [13/03/24]
- slides: (generative) knowledge graph embedding models
- guest lecture by Stefano Teso on "Reasoning-shortcuts in NeSy AI"
- lecture #9 [14/03/24]
- Groups 1, 6, 2 presenting
- lecture #10 [15/03/24]
- Groups 3, 4, 5 presenting
Each student will be evaluated (score 0-10) on their participation in class (10%) and on their final project (90%), to be done in teams. As the final project, all students in a team (max 4) will be asked to give a short presentation (20 mins + 10 mins Q&A) about one or more papers, or about past projects from their research experience. Every presentation shall critically discuss the chosen paper/project in light of what has been presented in the course.
Possible aspects to highlight and discuss in your presentation include:
- which probabilistic reasoning task is tackled?
- what is challenging w.r.t. reasoning and learning?
- which step of the reasoning pipeline is tractable? (and why?)
- which step is intractable instead? (and why? can it be made tractable?)
- can you improve tractability and/or quality of the approximations?
Tell Antonio who is on your team by the end of lecture #2.
Each team can select one or more papers from the list below, propose a paper that is not on the list, or simply present some unpublished work.
- Graph Mixture Density Networks
- Complex Query Answering with Neural Link Predictors
- Tractable Control for Autoregressive Language Generation
- Image Inpainting via Tractable Steering of Diffusion Models
- Hierarchical Decompositional Mixtures of Variational Autoencoders
- Faster Attend-Infer-Repeat with Tractable Probabilistic Models
- Neural Probabilistic Logic Programming in Discrete-Continuous Domains
- Lossless Compression with Probabilistic Circuits
Feel free to discuss the lectures and further topics in probabilistic reasoning in our Google group. Alternatively, send an email to Antonio for any question about the course organization.