[enhancement] Consistent parameter naming
Opened this issue · 3 comments
TNick commented
A network with three maxout.MaxoutLocalC01B layers, a maxout.Maxout layer, and an mlp.Softmax layer prints the "Parameter and initial learning rate summary" like so:
W: 0.0025
b: 0.0025
W: 0.0025
b: 0.0025
W: 0.0025
b: 0.0025
layer_4_W: 0.05
layer_4_b: 0.05
softmax_b: 0.05
softmax_W: 0.05
This is anything but consistent. I like the layer.name + '_' + W/b notation.
It is already used in most parts of the library, as grep -R "\.name = " shows.
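For example, a layer could name its parameters like this (a rough sketch, not code from the library; DemoLayer and its arguments are made up for illustration):

import numpy as np
from pylearn2.utils import sharedX

class DemoLayer(object):
    """Hypothetical layer showing the proposed naming convention."""
    def __init__(self, layer_name, n_in, n_out):
        self.layer_name = layer_name
        # Every shared variable is named layer_name + '_' + suffix,
        # so the monitor prints e.g. 'h0_W: 0.0025' instead of 'W: 0.0025'.
        self.W = sharedX(np.zeros((n_in, n_out)), name=self.layer_name + '_W')
        self.b = sharedX(np.zeros((n_out,)), name=self.layer_name + '_b')

layer = DemoLayer('h0', 784, 500)
print(layer.W.name)  # h0_W
print(layer.b.name)  # h0_b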
Would you accept a PR for this? If not, please suggest another naming convention and I would be happy to write a PR.
lamblin commented
Yes, that looks like a good idea, a PR would be welcome.
Thanks!
TNick commented
softmax_b and softmax_W too?
$ grep -R softmax_b
models/dbm/layer.py: self.b = sharedX( np.zeros((n_classes,)), name = 'softmax_b')
models/mlp.py: name='softmax_b')
scripts/tutorials/convolutional_network/convolutional_network.ipynb: "\tsoftmax_b: 0.00999999977648\n"
scripts/tutorials/jobman_integration.ipynb: "\tsoftmax_b: 0.000205\r\n",
scripts/tutorials/multilayer_perceptron/multilayer_perceptron.ipynb: "\tsoftmax_b: 0.00999999977648\n"
scripts/tutorials/multilayer_perceptron/multilayer_perceptron.ipynb: "\tsoftmax_b: 0.00999999977648\n"
scripts/tutorials/stacked_autoencoders/stacked_autoencoders.ipynb: "\tsoftmax_b: 0.0500000007451\n"
$ grep -R softmax_W
models/dbm/layer.py: self.W = sharedX(W, 'softmax_W' )
models/mlp.py: self.W = sharedX(W, 'softmax_W')
scripts/tutorials/convolutional_network/convolutional_network.ipynb: "\tsoftmax_W: 0.00999999977648\n"
scripts/tutorials/jobman_integration.ipynb: "\tsoftmax_W: 0.000205\r\n"
scripts/tutorials/multilayer_perceptron/multilayer_perceptron.ipynb: "\tsoftmax_W: 0.00999999977648\n"
scripts/tutorials/multilayer_perceptron/multilayer_perceptron.ipynb: "\tsoftmax_W: 0.00999999977648\n"
scripts/tutorials/stacked_autoencoders/stacked_autoencoders.ipynb: "\tsoftmax_W: 0.0500000007451\n"
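If so, the change in models/mlp.py would be along these lines (a sketch only; it assumes self.layer_name is already set by the time the parameters are created):

# current code in models/mlp.py:
self.W = sharedX(W, 'softmax_W')
# proposed:
self.W = sharedX(W, self.layer_name + '_W')
# and likewise for the bias, replacing name='softmax_b' with
# name=self.layer_name + '_b'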
JesseLivezey commented
I think consistency everywhere would be welcome.