kennethsinder/oct-opus

Experiment with pix2pix code

Closed this issue · 3 comments

Stretch our current pix2pix cGAN baseline in a few different directions. This will involve:

  1. Learning more about how our current neural net works, and ideally documenting what you learn. (Explaining it in terms of the block diagrams and other figures included in the pix2pix tutorial may help you structure this documentation.)
  2. Changing high-level settings like the learning rate of the Adam optimizer (e.g. 5e-4 vs. 2e-4) and seeing how that changes training time and the quality of our results. I don't expect any changes deep within the layers at this stage; the goal is to get a feel for the higher-level factors that affect result quality.
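The learning-rate experiment in step 2 can be sketched with a toy version of the Adam update rule. This is a minimal pure-Python illustration, not the project's training code: the quadratic objective, step count, and beta values are assumptions made for the sketch (the pix2pix tutorial's Adam uses beta_1 = 0.5).

```python
import math

def adam_minimize(lr, steps=200, beta1=0.5, beta2=0.999, eps=1e-8):
    """Run Adam on f(x) = x^2 starting from x = 1.0 and return the final x."""
    x, m, v = 1.0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = 2.0 * x                          # gradient of x^2
        m = beta1 * m + (1 - beta1) * g      # first-moment (mean) estimate
        v = beta2 * v + (1 - beta2) * g * g  # second-moment estimate
        m_hat = m / (1 - beta1 ** t)         # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# With a steady gradient, Adam's normalized step is roughly lr per iteration,
# so the larger rate moves farther toward the minimum in the same step budget.
x_small = adam_minimize(2e-4)
x_large = adam_minimize(5e-4)
```

The same trade-off shows up in training: a larger rate makes faster per-step progress but can overshoot and destabilize the GAN, which is why it's worth comparing a couple of values empirically.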
pl3li commented

I think everybody should be trying to learn more about pix2pix and experimenting with the code, no?

In retrospect, this issue is too vague, and the work I intended in the description is probably behind us now :) Closing.