cum_loss4x300.txt
  LR: 0.005
  decay 0.5
    RMS prop
  2 batch size
  1000 epochs
  5 layers, 300 neurons per layer. (363,300 parameters total)
  1 input, 10 output. 
Got it down to ~150 (worse, though).
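
  For reference, a minimal sketch of this setup (plain NumPy and the helper
  name are assumptions; the update rule is standard RMSProp):

    import numpy as np

    # 1 input -> 5 hidden layers of 300 -> 10 outputs.
    # Weights-only count: 1*300 + 4*(300*300) + 300*10 = 363,300, as noted.
    sizes = [1] + [300] * 5 + [10]
    weights = [np.random.randn(m, n) for m, n in zip(sizes[:-1], sizes[1:])]
    cache = [np.zeros_like(w) for w in weights]
    lr, decay, eps = 0.005, 0.5, 1e-8

    def rmsprop_step(weights, grads, cache):
        # RMSProp: keep a running average of squared gradients and
        # scale each update by the root of that average.
        for w, g, c in zip(weights, grads, cache):
            c *= decay
            c += (1 - decay) * g * g
            w -= lr * g / (np.sqrt(c) + eps)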


cum_loss_largebatch_20.txt
  LR: 0.005
  decay 0.5
    RMS prop
  20 batch size
  1000 epochs
  5 layers, 300 neurons per layer. 
  1 input, 10 output. 
Got it down to ~150 (worse).

  This took 78.05 seconds.


cum_loss_largebatch_50.txt
  LR: 0.005
  decay 0.5
    RMS prop
  50 batch size
  1000 epochs
  5 layers, 300 neurons per layer. 
  1 input, 10 output. 
Got it down to ~150.

  This took 41.3 seconds. 
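
  The speedup from batch 20 to batch 50 is just fewer weight updates per
  epoch. A sketch of the epoch loop, assuming the data sits in arrays x and y
  and a backprop() helper exists (all three hypothetical), with rmsprop_step
  from the sketch above:

    batch = 50
    for epoch in range(1000):
        perm = np.random.permutation(len(x))
        for i in range(0, len(x), batch):
            idx = perm[i:i + batch]
            grads = backprop(weights, x[idx], y[idx])  # hypothetical helper
            rmsprop_step(weights, grads, cache)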

cum_loss_lr_0.0005.txt
  LR: 0.0005
  decay 0.9
    RMS prop
  50 batch size
  1000 epochs
  5 layers, 300 neurons per layer. 
  1 input, 10 output. 
Got it down to ~25 (best result).

  This took 38.7 seconds. 


cum_loss_lr_0.00005.txt
LR: 0.00005
  decay 0.9
    RMS prop
  50 batch size
  1000 epochs
  5 layers, 300 neurons per layer. 
  1 input, 10 output. 
Got it down to ~25 (best result).

  This took 38.7 seconds. 



cum_loss_newinit.txt
init: stddev at 2, scale the whole thing by 0.01.
  LR: 0.0005
  decay 0.9
    RMS prop
  50 batch size
  2000 epochs
  5 layers, 300 neurons per layer. 
  1 input, 10 output. 
Got it down to ~200.

This now gives a normal-looking spectrum.

  This took 39.7 seconds. 


cum_loss_good.txt
  init: stddev at 0.1. Don't scale anything.
  LR: 0.00005
  decay: 0.9
    RMS prop
  50 batch
  2000 epochs
5 layers, 300 neurons per layer.
1 input, 10 output.
Got it down to ~2.0.
This gives a good spectrum (taking residuals).
This took 40 seconds.

cum_loss_dev0.5.txt
  init: stddev at 0.5. Don't scale anything.
  LR: 0.00005
  decay: 0.9
    RMS prop
  50 batch
  2000 epochs
5 layers, 300 neurons per layer.
1 input, 10 output.

  Got it down to ~3.5
This gives a good spectrum (taking residuals).
This took 40 seconds.

cum_loss_dev0.1scaled.txt
  init: stddev at 0.1. scaled by 0.5
  LR: 0.00005
  decay: 0.9
    RMS prop
  50 batch
  2000 epochs
5 layers, 300 neurons per layer.
1 input, 10 output.

  Got it down to ~2.5
This gives a good spectrum (taking residuals).
This took 40 seconds.
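
  Across these init runs the thing that actually varies is the effective
  stddev of the weights. A sketch of the four variants (np.random.normal
  assumed; m, n are example layer dimensions):

    m, n = 300, 300  # example layer shape
    # stddev 2, scaled by 0.01  -> effective stddev 0.02 (loss ~200)
    w = 0.01 * np.random.normal(0, 2.0, size=(m, n))
    # stddev 0.1, no scaling    -> effective stddev 0.1  (loss ~2.0)
    w = np.random.normal(0, 0.1, size=(m, n))
    # stddev 0.5, no scaling    -> effective stddev 0.5  (loss ~3.5)
    w = np.random.normal(0, 0.5, size=(m, n))
    # stddev 0.1, scaled by 0.5 -> effective stddev 0.05 (loss ~2.5)
    w = 0.5 * np.random.normal(0, 0.1, size=(m, n))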

cum_loss_lr_0.000005.txt
init: stddev at 0.1. Don't scale anything.
LR: 0.000005
  decay: 0.9
    RMS prop
  50 batch
  2000 epochs
5 layers, 300 neurons per layer.
1 input, 10 output.

  Got it down to ~1000
This took 40 seconds.

cum_loss_longrun.txt
  init: stddev at 0.1.
  LR: 0.00005
  decay: 0.9
    RMS prop
  50 batch
  3000 epochs
5 layers, 300 neurons per layer.
1 input, 10 output.

  Got it down to ~1.6
This took 40 seconds.

cum_loss_small.txt
  init: stddev at 0.1.
  LR: 0.00005
  decay: 0.9
    RMS prop
  50 batch
  2000 epochs
5 layers, 100 neurons per layer.
1 input, 10 output.

  Got it down to ~150
This took 11 seconds.

cum_loss_small10000.txt
  init: stddev at 0.1.
  LR: 0.00005
  decay: 0.9
    RMS prop
  50 batch
  10000 epochs
5 layers, 100 neurons per layer.
1 input, 10 output.

Got it down to ~0.276.
This took 56 seconds.

cum_loss_3x50.txt
  init: stddev at 0.1.
  LR: 0.00005
  decay: 0.9
    RMS prop
  50 batch
  10000 epochs
3 layers, 50 neurons per layer.
1 input, 10 output.

  Got it down to ~13.3
This took 32 seconds.

cum_loss_3x50_long.txt
  init: stddev at 0.1.
  LR: 0.00005
  decay: 0.9
    RMS prop
  50 batch
50,000 epochs
3 layers, 50 neurons per layer (5,550)
1 input, 10 output

Got it down to ~0.11.
This took 32 seconds.
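
  The counts in parentheses are weights only (no biases). A quick sanity
  check:

    def n_weights(sizes):
        # Sum of weight-matrix entries between consecutive layers;
        # reproduces the counts noted above.
        return sum(m * n for m, n in zip(sizes[:-1], sizes[1:]))

    n_weights([1] + [300] * 5 + [10])  # 363,300
    n_weights([1] + [50] * 3 + [10])   # 5,550
    n_weights([1] + [20] * 5 + [10])   # 1,820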


cum_loss_5x20_long.txt (overwritten)
  init: stddev at 0.1.
  LR: 0.00005
  decay: 0.9
    RMS prop
  50 batch
50,000 epochs
5 layers, 20 neurons per layer (1,820)
1 input, 10 output

Got it down to ~0.23.
This took 145 seconds.
  We are going to use this network. 


cum_loss_5x20_long.txt (rerun)
  init: stddev at 0.1.
  LR: 0.00005
  decay: 0.9
    RMS prop
  50 batch
50,000 epochs
5 layers, 20 neurons per layer (1,820)
1 input, 10 output

Got it down to ~1.06.
This took 144 seconds.
  We are going to use this network. 
  These weights are saved as the copy versions for backup. 
  Cool beans. It works!
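
  Sketch of the backup step (np.save assumed; the filenames are
  hypothetical):

    for i, w in enumerate(weights):  # weights list for the 5x20 net
        np.save(f"w_5x20_{i}.npy", w)
        np.save(f"w_5x20_{i}_copy.npy", w)  # the backup "copy" versions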


Now I need to generate the fixed y data.


large_complex_net_5x20.txt
  Used larger dataset
  init: stddev at 0.1
  LR: 0.00005
  decay: 0.9
    RMS prop
  200 batch
  100,000 epochs
5 layers, 20 neurons per layer (1,820)
  1 input, 100 output

  Got it down to ~200
This took 4538.98 seconds.
  We are now going to test it.
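
  For the test, a sketch of the forward pass. The notes don't record the
  activation function, so tanh hidden layers and a linear output are
  assumptions:

    def predict(weights, x):
        a = x
        for w in weights[:-1]:
            a = np.tanh(a @ w)   # assumed hidden activation
        return a @ weights[-1]   # assumed linear output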