google/prettytensor

Scope names

rlrs opened this issue · 3 comments

rlrs commented

How do I avoid creating a new scope for new variables? I've looked into the code, but I simply cannot see how .variable() creates a new scope for the variable.
This is undesirable behavior for me, since I want to use the current scope that I assigned with tf.variable_scope() to share some variables.

The new variable_scope is created when entering the method (it is buried in
the registration code, which is quite hairy to read).

The general way to share variables is to use the following pattern:

with tf.variable_scope('my_scope') as vs:
    build_model()

with tf.variable_scope(vs, reuse=True):  # could also enter 'my_scope'
    build_model()
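
For concreteness, here is a minimal sketch of that pattern with a toy build_model (the layer sizes, explicit layer names, and placeholder shape are just illustrative, not part of the issue):

import tensorflow as tf
import prettytensor as pt

def build_model(images):
    # A tiny tower; giving each layer an explicit name makes the variable
    # lookup unambiguous when the scope is re-entered with reuse=True.
    return (pt.wrap(images)
            .fully_connected(64, name='hidden')
            .fully_connected(10, name='logits'))

images = tf.placeholder(tf.float32, [None, 784])

with tf.variable_scope('my_scope') as vs:
    train_out = build_model(images)

with tf.variable_scope(vs, reuse=True):  # or tf.variable_scope('my_scope', reuse=True)
    eval_out = build_model(images)

# Only one copy of the parameters exists: two weight/bias pairs in total.
print([v.name for v in tf.trainable_variables()])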

You can do this with any sub-part of the model as well. If you want to share variables across different layer types, you can supply the same name to the layers, or do something like the following (getting the shapes to line up is tricky, but possible):

x = single_patch.fully_connected(...)

y = full_image.conv2d(..., init=x.layer_parameters['weight'],
                      bias_init=x.layer_parameters['bias'])
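
To make the shape bookkeeping concrete, here is a hedged sketch with made-up sizes (5x5 patches, 3 channels, depth 32); it only annotates the snippet above, the exact arguments are illustrative:

# Assumed inputs: single_patch wraps [batch, 5*5*3] flattened patches,
# full_image wraps [batch, H, W, 3] images. All sizes are made up.
x = single_patch.fully_connected(32)   # weight shape: [5*5*3, 32] = [75, 32]

# A 5x5 convolution with depth 32 over a 3-channel image uses a
# [5, 5, 3, 32] kernel, i.e. the same 75*32 numbers in a 4-D layout;
# that correspondence is the "shapes line up" part that is tricky.
y = full_image.conv2d(5, 32,
                      init=x.layer_parameters['weight'],
                      bias_init=x.layer_parameters['bias'])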

rlrs commented

I see, I am indeed sharing variables across different layer types, namely conv2d and deconv2d (which is a custom layer). Does initializing in that way actually share the variables, though? It seems to me that once x's layer parameters are updated, y's won't be.

Yes, it does. Under the hood it is just reusing the same variables, and the gradients are computed correctly.
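
If it helps to see why, here is a small plain-TensorFlow sketch (no Pretty Tensor involved) showing that a reused variable is a single object and its gradient collects contributions from every place it is read:

import tensorflow as tf

inp = tf.placeholder(tf.float32, [None, 4])

with tf.variable_scope('shared') as vs:
    w = tf.get_variable('w', [4, 4])
    out_a = tf.matmul(inp, w)

with tf.variable_scope(vs, reuse=True):
    w_again = tf.get_variable('w', [4, 4])   # same underlying variable as w
    out_b = tf.matmul(inp, w_again)

loss = tf.reduce_sum(out_a) + tf.reduce_sum(out_b)

# There is exactly one trainable 'w'; its gradient sums the contributions
# from both branches that read it.
print([v.name for v in tf.trainable_variables()])   # ['shared/w:0']
grad, = tf.gradients(loss, [w])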