Melspectrogram can't be set with 'trainable_fb=False'
zhh6 opened this issue · 8 comments
Melspectrogram can't be set with 'trainable_fb=False'. After I set trainable_fb=False and trainable_kernel=False, it doesn't seem to work: the layer is still trainable.
It's a problem with tf.keras. The first solution that comes to mind is to declare the Melspectrogram layer and then set the model's layers as not trainable:
from tensorflow.keras.models import Sequential
from kapre.time_frequency import Melspectrogram

model = Sequential()
model.add(Melspectrogram(sr=SR, n_mels=128,
                         n_dft=512, n_hop=128, input_shape=input_shape,
                         return_decibel_melgram=True,
                         trainable_kernel=False, trainable_fb=False,
                         name='melgram'))
for layer in model.layers:
    layer.trainable = False
you can verify with
model.non_trainable_weights
Output:
[<tf.Variable 'melgram/real_kernels:0' shape=(512, 1, 1, 257) dtype=float32>,
<tf.Variable 'melgram/imag_kernels:0' shape=(512, 1, 1, 257) dtype=float32>,
<tf.Variable 'melgram/Variable:0' shape=(257, 128) dtype=float32>]
The first two are part of the Spectrogram class [the kernel], and Variable:0 is [fb]; it shows up unnamed because the author forgot to set a name for it. You should be able to set any of them as non-trainable separately, as sketched below.
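If you need finer control than layer.trainable, one option (not Kapre's API, just plain TF 2.x) is to choose which variables the optimizer updates. A rough sketch, assuming the model above has more layers after the mel layer and that the unnamed 'melgram/Variable:0' is the one you want frozen:

import tensorflow as tf

# Sketch: freeze only the mel filterbank ('melgram/Variable:0' above), update everything else.
# The name filter is an assumption based on the non_trainable_weights output shown earlier.
update_vars = [v for v in model.weights if 'melgram/Variable' not in v.name]

optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    # Gradients are computed (and applied) only for the selected variables.
    grads = tape.gradient(loss, update_vars)
    optimizer.apply_gradients(zip(grads, update_vars))
    return loss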
@zhh6 Could you provide test code that checks it, like what we have in tests? I can't spend a lot of time updating Kapre these days but can try something small. I guess it'd be fixed if we follow the new guide, i.e., something like
self.b = self.add_weight(shape=(units,),
                         initializer='zeros',
                         trainable=True)
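For context, roughly how the filterbank could be registered following that guide - just a sketch, not Kapre's actual code; names like freq_to_mel and trainable_fb here are only illustrative:

import tensorflow as tf

class MelFilterbank(tf.keras.layers.Layer):
    """Illustrative layer: applies a (possibly trainable) mel filterbank."""

    def __init__(self, n_freq=257, n_mels=128, trainable_fb=False, **kwargs):
        super().__init__(**kwargs)
        self.n_freq = n_freq
        self.n_mels = n_mels
        self.trainable_fb = trainable_fb

    def build(self, input_shape):
        # An explicit name avoids the anonymous 'Variable:0' seen above,
        # and add_weight() respects the trainable flag.
        self.freq2mel = self.add_weight(
            name='freq_to_mel',
            shape=(self.n_freq, self.n_mels),
            initializer='glorot_uniform',  # in practice, initialized with a mel matrix
            trainable=self.trainable_fb,
        )
        super().build(input_shape)

    def call(self, x):
        # x: (..., n_freq) linear-frequency magnitudes -> (..., n_mels)
        return tf.matmul(x, self.freq2mel)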
Thanks a lot. I have solved the problem, just like you suggested:
m = Melspectrogram(sr=SR, n_mels=128, n_dft=512, n_hop=128,
                   input_shape=input_shape, return_decibel_melgram=True,
                   trainable_kernel=False, trainable_fb=False, name='melgram')
m.trainable = False
Why not just use tf.Variable(.., trainable=False)? However, my question is... @keunwoochoi why did you decide to set those variables as "weights", trainable or not? Shouldn't they be fixed?
That's because one might want to set it trainable to optimize the time-frequency representation.
This would be one of the things I'd like to fix wrt tf 2.0, as well as using tf.stft. I can also quickly update the code to use self.add_weight(), but that should come with dropping support for tf 1.0 (which I'm fine with).
I'm trying to update using tf.stft ... however I think it's going to be impossible to set the kernels as trainable in that scenario... In fact, the Spectrogram class would be very short... Do you have any idea or recommendation for that update?
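Roughly what I have in mind - just a sketch assuming tf.signal.stft from TF 2.x, with illustrative names and defaults:

import tensorflow as tf

class STFTSpectrogram(tf.keras.layers.Layer):
    """Sketch of a (non-trainable) spectrogram layer built on tf.signal.stft."""

    def __init__(self, n_fft=512, hop_length=128, **kwargs):
        super().__init__(**kwargs)
        self.n_fft = n_fft
        self.hop_length = hop_length

    def call(self, waveforms):
        # waveforms: (batch, time) float32
        stft = tf.signal.stft(
            waveforms,
            frame_length=self.n_fft,
            frame_step=self.hop_length,
            fft_length=self.n_fft,
        )
        # Magnitude spectrogram, shape (batch, frames, n_fft // 2 + 1).
        return tf.abs(stft)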
that'd be great, thanks! and you're right - with tf.stft, one can't make it trainable.
i just made https://github.com/keunwoochoi/kapre/tree/kapre-0.2. this is to aggregate changes towards kapre 0.2, which will drop tf 1.0 completely. so, for that kind of change, please make a PR to this branch and i'll later merge all of them to master at once.
besides, the PR should be just typical - with some unit tests, which we can now compare with librosa.stft :)
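e.g., such a test could look roughly like this - just a sketch; the center/window settings here are assumptions and would need to match what the layer actually does:

import numpy as np
import librosa
import tensorflow as tf

def test_stft_magnitude_close_to_librosa():
    sr, n_fft, hop = 22050, 512, 128
    y = np.random.randn(sr).astype(np.float32)

    # librosa centers (pads) frames by default; disable it to align frame positions.
    ref = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop, center=False))

    out = tf.abs(
        tf.signal.stft(y, frame_length=n_fft, frame_step=hop, fft_length=n_fft)
    ).numpy().T  # transpose to (freq, frames) to match librosa's layout

    np.testing.assert_allclose(out, ref, rtol=1e-2, atol=1e-2)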