pyro-ppl/pyro

Normalizing Flows Examples

emited opened this issue · 19 comments

Hi,

It would be nice to have some go-to examples for training normalizing flows, perhaps reproducing results on toy datasets (e.g. the ones implemented here). In the current state of things, it is not very clear how to proceed.
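
To make this concrete, here is roughly what I would hope such a tutorial walks through (a rough, untested sketch based on my reading of `pyro.distributions.transforms`; the `make_moons` toy data is from scikit-learn):

```python
import torch
import pyro.distributions as dist
import pyro.distributions.transforms as T
from sklearn.datasets import make_moons

# Toy 2D dataset to fit by maximum likelihood.
X = torch.tensor(make_moons(n_samples=1000, noise=0.05)[0], dtype=torch.float)

# A single coupling-spline layer over a standard Gaussian base.
base = dist.Normal(torch.zeros(2), torch.ones(2))
transform = T.spline_coupling(2, count_bins=16)
flow = dist.TransformedDistribution(base, [transform])

optimizer = torch.optim.Adam(transform.parameters(), lr=5e-3)
for step in range(2000):
    optimizer.zero_grad()
    loss = -flow.log_prob(X).mean()  # negative log likelihood of the data
    loss.backward()
    optimizer.step()
    flow.clear_cache()  # cached (x, y) pairs are stale after the update
```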

What do you think? @stefanwebb

Would be happy to help.

Hi @emited, thanks for your interest in NFs! I'm glad you've found the code useful.

I absolutely agree, it would be great to have a tutorial with examples of using NFs, so users can understand what NFs are, how they can benefit from them, and how to use the library.

I've got some other things occupying me for the next two or three weeks and was planning on returning to the NF part of Pyro then. But in the meantime, you would be very welcome to design and submit a tutorial, e.g. showing how to learn simple toy datasets like in the link. Or perhaps, if you're interested, we can work on it together when I am available again?

We could then extend this tutorial to show how to use conditional NFs for amortized inference once my most recent PRs have been merged, perhaps with some examples on deep generative models + easyguides.

Another avenue for contribution, if you're interested, is implementing neural ODEs/FFJORD as a Pyro/PyTorch transform and integrating it into the NF library.

@stefanwebb thanks a lot for your reply, working together to make a tutorial on NFs in Pyro seems like a great idea. Maybe we can discuss this in more detail somewhere else.

For starters, I was thinking about implementing some simple tests to verify that the implemented flows can model some very simple distributions, as they should (maybe you already have something like this?). What do you think?
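
For concreteness, the kind of test I have in mind would look something like this (a rough, untested sketch, not an existing Pyro test):

```python
import torch
import pyro.distributions as dist
import pyro.distributions.transforms as T

def test_flow_recovers_gaussian():
    # Samples from a known diagonal Gaussian as the "very simple" target.
    target = dist.Normal(torch.tensor([1.0, -2.0]), torch.tensor([0.5, 1.5]))
    data = target.sample((5000,))

    base = dist.Normal(torch.zeros(2), torch.ones(2))
    transform = T.affine_autoregressive(2)
    flow = dist.TransformedDistribution(base, [transform])

    optimizer = torch.optim.Adam(transform.parameters(), lr=1e-2)
    for _ in range(1000):
        optimizer.zero_grad()
        (-flow.log_prob(data).mean()).backward()
        optimizer.step()
        flow.clear_cache()

    # Loose moment checks: the fitted flow should roughly match the target.
    samples = flow.sample((5000,))
    assert torch.allclose(samples.mean(0), data.mean(0), atol=0.2)
    assert torch.allclose(samples.std(0), data.std(0), atol=0.2)
```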

I am currently implementing a simplified version of neural ODEs/FFJORD in Pyro, so I would be happy to discuss this too.

some sort of tutorial would be great!

i would probably advise against actual tests, however. this is because even if the flow implementation is correct, it can be tricky to fit even quite "simple" distributions. in many cases it will just be a matter of hyperparameter tuning. i suspect that in most cases where authors of flow papers show pretty pictures, a lot of tuning went into it. (so these methods are never quite as blackbox/magical as advertised).

for that reason i think it makes more sense to stick a flow in an actual model, whether that be a VAE or something else. there one can demonstrate improved test log likelihoods, which, depending on the application, might actually be useful. probably more useful than pretty pictures.

@emited I agree with Martin here - unit tests wouldn't be that useful. But I think learning toy distributions is useful in a tutorial for illustrating how to use the API. I would start off with that, then have a second section that shows how to use them in guide programs to improve VI, for VAEs/AIR/traditional Bayesian models etc.
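
For that second section, I'm imagining something along these lines (a minimal, untested sketch on a toy model; `AutoIAFNormal` is Pyro's autoguide built on an inverse autoregressive flow):

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoIAFNormal
from pyro.optim import Adam

def model(data):
    # Toy Bayesian model whose joint posterior over (loc, scale) is non-Gaussian.
    loc = pyro.sample("loc", dist.Normal(0.0, 10.0))
    scale = pyro.sample("scale", dist.LogNormal(0.0, 1.0))
    with pyro.plate("data", len(data)):
        pyro.sample("obs", dist.Normal(loc, scale), obs=data)

data = 3.0 + 2.0 * torch.randn(100)
guide = AutoIAFNormal(model)  # flow-based variational posterior
svi = SVI(model, guide, Adam({"lr": 1e-2}), loss=Trace_ELBO())
for step in range(2000):
    svi.step(data)
```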

Yes, let's take this discussion offline. Could you send me an email please (my address is on my GitHub profile)?

This seems like a valid argument. But it would be important to at least be able to reproduce results on the toy datasets, with a bit of parameter tweaking of course. This could then be included in the tutorial.

I've just sent you an email!

Yeah, normalizing flows would be super useful with VAE examples like in bjlkeng's article: http://bjlkeng.github.io/posts/variational-autoencoders-with-inverse-autoregressive-flows/
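
The core idea there, roughly, is to push the encoder's Gaussian through an IAF before scoring the latent. A hedged sketch of what that could look like as a Pyro guide (not code from the article; the `Encoder` network is a stand-in):

```python
import torch
import torch.nn as nn
import pyro
import pyro.distributions as dist
import pyro.distributions.transforms as T

z_dim, x_dim, hidden = 20, 784, 400

# Stand-in encoder network: x -> (loc, scale) of the base Gaussian.
class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(x_dim, hidden)
        self.loc = nn.Linear(hidden, z_dim)
        self.log_scale = nn.Linear(hidden, z_dim)

    def forward(self, x):
        h = torch.relu(self.fc(x))
        return self.loc(h), self.log_scale(h).exp()

encoder = Encoder()
iaf = T.affine_autoregressive(z_dim)  # Pyro's IAF-style transform

def guide(x):
    pyro.module("encoder", encoder)
    pyro.module("iaf", iaf)  # register the flow's parameters with Pyro
    loc, scale = encoder(x)
    base = dist.Normal(loc, scale).to_event(1)
    with pyro.plate("data", x.shape[0]):
        # Push the encoder's Gaussian through the IAF before scoring z.
        pyro.sample("z", dist.TransformedDistribution(base, [iaf]))
```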

I recently had to rewrite my code from PyTorch to Pyro to support many NF families, so I made a documented Jupyter notebook that fits a very simple bivariate mixture of two Gaussians using any flow from Pyro.

Link to notebook

The code might not be optimal, so comments/feedback appreciated!

Hey guys,
Did you make these tutorials by any chance?
Thanks

Hi @saeed1262, these are in the pipeline and hopefully I will get a chance to finish them soon.

I'm planning on doing a three part tutorial:

  • Part 1 covering the API and learning simple distributions
  • Part 2 covering the reproduction of SoTA results from the Neural Spline Flows paper
  • Part 3 covering using normalizing flows for flexible variational inference in Pyro

Hi @stefanwebb ,
Thanks for the quick reply.
That would be great.
We are also planning to set up an NF tutorial class at YorkU, so if you are interested I can share my materials with you when they are ready.

Do you have an approx date for each part?

I’ll have the first part submitted for review as a PR before Friday this week, but I can’t commit to a timeframe for the other two yet.

Great. Thanks

@saeed1262 sorry for the delay! See #2542 for a draft of the first tutorial (feedback welcome)

Looking forward to the second part.

@heroxbd normalizing flows work has moved to https://flowtorch.ai

Hi, does this mean that Pyro will drop normalizing flow functionality?

@fritzo what do you think about this?

@maulberto3 I just wanted to mention that I had to suspend FlowTorch development at Meta after the economic climate changed earlier in the year. However, I am able to resume work on it now!

@stefanwebb I know what you are saying, and it's not nice; I wish all of you the best. For what it's worth, I still haven't tuned a simple MNIST NF example of mine, here or with Pyro, but maybe soon. If you need any help with this, let me know.

what do you think about [Pyro dropping normalizing flow functionality]?

Pyro tries to avoid changes that would break user code. Thus if we do drop maintenance for normalizing flows, we'd at most add a DeprecationWarning or FutureWarning; I do not foresee us removing any of the existing flows from Pyro. @stefanwebb if you plan to maintain flowtorch going forward, then feel free to add DeprecationWarning or FutureWarning to Pyro's flows, including links that point to the corresponding flows in flowtorch.
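
Concretely, something like the following could be called at the top of each flow's constructor (a sketch only; the helper name is hypothetical):

```python
import warnings

def _warn_moved_to_flowtorch(name):
    """Point users of a Pyro flow at its flowtorch successor (sketch)."""
    warnings.warn(
        f"pyro.distributions.transforms.{name} may be deprecated in favor "
        f"of its counterpart in flowtorch; see https://flowtorch.ai",
        FutureWarning,
        stacklevel=3,
    )

# e.g. inside a flow's constructor:
# class Planar(...):
#     def __init__(self, input_dim):
#         _warn_moved_to_flowtorch("Planar")
#         ...
```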