google-research/lottery-ticket-hypothesis

Getting activations instead of weights

ZohrehShams opened this issue · 0 comments

I'm trying to extract the activation values of the neurons in each layer and store them. I have tried many variations of the `dense_layer` method in `model_base`, aiming to store activations in a dictionary the same way weights are stored for each layer. But unlike the weights, the shape of an activation tensor depends on the batch size and is therefore unknown before training, so all my attempts have failed, including those that define a TF variable with a dynamic shape. Is there another way to do layer-wise activation extraction that I can try? Thanks.
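
To make the shape issue concrete, here is a minimal NumPy sketch (hypothetical, not code from this repo) of the pattern in question: each layer's activations are collected in a dict keyed by layer name, and their leading dimension follows whatever batch size is fed in, which is why they can't be preallocated like the fixed-shape weight tensors:

```python
import numpy as np

def forward_with_activations(x, weights):
    """Run a forward pass through dense layers, recording each layer's
    activations in a dict. Activation shapes depend on the batch size
    (x.shape[0]), so they are captured per pass rather than stored in
    fixed-shape variables the way weights are."""
    activations = {}
    h = x
    for name, (w, b) in weights.items():
        h = np.maximum(h @ w + b, 0.0)  # dense layer with ReLU
        activations[name] = h
    return h, activations

# Hypothetical two-layer network; layer names mirror a weights dict.
rng = np.random.default_rng(0)
weights = {
    "layer1": (rng.standard_normal((4, 8)), np.zeros(8)),
    "layer2": (rng.standard_normal((8, 3)), np.zeros(3)),
}
batch = rng.standard_normal((5, 4))  # batch size 5, not known in advance
out, acts = forward_with_activations(batch, weights)
print(acts["layer1"].shape)  # (5, 8): leading dim follows the batch
```

In graph-mode TensorFlow the analogous approach is to keep Python references to each layer's output tensor and fetch them with `session.run`, rather than trying to persist them in variables.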