New Wavelet
oykueravci opened this issue · 5 comments
Hi,
Is there any way to implement a new wavelet with designed coefficients? I designed a new wavelet for my school project and I want to implement it using WaveTF.
Thank you!
Hi @oykueravci ,
to implement a new kernel you need to derive the base classes contained in `_base_wavelets.py` (e.g., `DirWaveLayer1D`), as is done, e.g., in `_daubachies_conv.py`. Deriving a new direct transformation is quite straightforward, whereas implementing a correct inverse transform can be trickier (see the article linked in the README for a detailed example). If you want an example of a general-size Daubechies transformation, which accepts coefficients as input, have a look at the `general_kernel` branch, and in particular at `wavetf/_general_conv.py` and `examples/gen_1d_wavelet.py`.
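For intuition about what a derived direct layer has to compute, here is a sketch in plain TensorFlow (this is not WaveTF's API; `direct_wavelet_1d` is a made-up name): a direct 1D wavelet step is just a stride-2 convolution with the low-pass and high-pass filters.

```python
import tensorflow as tf

def direct_wavelet_1d(x, lo, hi):
    """One direct 1D wavelet step on x of shape (batch, time, 1):
    a stride-2 convolution with the low-pass and high-pass filters."""
    k = int(lo.shape[0])
    if k > 2:
        # circular padding so overlapping filters wrap around the signal
        x = tf.concat([x, x[:, :k - 2, :]], axis=1)
    # filters shaped (width, in_channels=1, out_channels=2)
    filt = tf.stack([lo, hi], axis=1)[:, tf.newaxis, :]
    filt = tf.cast(filt, x.dtype)
    # stride 2 halves the time axis; channel 0 = approx, channel 1 = detail
    return tf.nn.conv1d(x, filt, stride=2, padding="VALID")

# Haar filters, the simplest orthonormal pair
s = 2.0 ** -0.5
lo = tf.constant([s, s], dtype=tf.float64)
hi = tf.constant([s, -s], dtype=tf.float64)
x = tf.reshape(tf.range(8, dtype=tf.float64), (1, 8, 1))
y = direct_wavelet_1d(x, lo, hi)
print(y.shape)  # (1, 4, 2): half the length, (approx, detail) channels
```

The inverse transform is harder precisely because, for filters longer than 2 taps, the overlapping windows have to be disentangled at the boundaries.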
Thank you for the response! I hope I can handle it!
Hey, I implemented an autoencoder structure like this using WaveTF. You said the inverse would be trickier, but the model is training, so I'm doubting myself here. Also, why would we call db3 repeatedly up to wave4? I feel kind of lost here...
```python
ker_db3 = [
    [0.3326705529509569,    0.035226291882100656],
    [0.8068915093133388,    0.08544127388224149],
    [0.4598775021193313,   -0.13501102001039084],
    [-0.13501102001039084, -0.4598775021193313],
    [-0.08544127388224149,  0.8068915093133388],
    [0.035226291882100656, -0.3326705529509569],
]
ker_db3 = tf.convert_to_tensor(ker_db3, dtype=tf.float64)
db3 = KerWaveLayer1D(ker_db3)
invdb3 = InvKerWaveLayer1D(ker_db3)
```
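(Side note for anyone pasting such coefficients by hand: an orthonormal Daubechies kernel can be sanity-checked numerically. The low-pass column should sum to √2, both columns should have unit norm, and they should be orthogonal. A quick NumPy check, assuming the column layout above, with the low-pass filter in column 0:)

```python
import numpy as np

# db3 analysis kernel: column 0 = low-pass g, column 1 = high-pass h,
# with h[k] = (-1)**k * g[N - 1 - k]
ker_db3 = np.array([
    [0.3326705529509569,    0.035226291882100656],
    [0.8068915093133388,    0.08544127388224149],
    [0.4598775021193313,   -0.13501102001039084],
    [-0.13501102001039084, -0.4598775021193313],
    [-0.08544127388224149,  0.8068915093133388],
    [0.035226291882100656, -0.3326705529509569],
])

lo, hi = ker_db3[:, 0], ker_db3[:, 1]
assert np.isclose(lo.sum(), np.sqrt(2))  # low-pass taps sum to sqrt(2)
assert np.isclose(hi.sum(), 0.0)         # high-pass taps sum to 0
assert np.isclose(lo @ lo, 1.0)          # unit norm
assert np.isclose(hi @ hi, 1.0)
assert np.isclose(lo @ hi, 0.0)          # orthogonal columns
print("db3 kernel looks consistent")
```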
This is the encoder block:
```python
def EncoderMiniBlock(inputs, kernel_size, n_filters=32, dropout_prob=0.3,
                     wavelet=True, wave_kern="db2"):
    if wavelet:
        wave0 = inputs
        wave1 = db3.call(wave0[:, :, :])
        wave2 = db3.call(wave1[:, :, :])
        # wave3 = db3.call(wave2[:, :, :])
        # wave4 = db3.call(wave3[:, :, :])
        # waves = [wave1, wave2, wave3, wave4]
        # next_layer = WaveTFFactory.build(wave_kern, dim=1)(conv)
    conv = Conv1D(
        n_filters,
        kernel_size,
        activation="relu",
        padding="same",
        kernel_initializer="HeNormal")(wave2)
    conv = BatchNormalization()(conv, training=False)
    if dropout_prob > 0:
        conv = tf.keras.layers.Dropout(dropout_prob)(conv)
    return conv
```
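(For readers puzzled by the commented-out wave3/wave4 lines: each 1D wavelet call halves the time axis and doubles the channel axis, so chaining calls gives a multi-level decomposition. A shape-only sketch, assuming the layers behave like WaveTF's 1D transforms:)

```python
def wavelet_shape(shape, levels):
    """Track (length, channels) through repeated 1D wavelet calls:
    each call halves the length and doubles the channels."""
    length, channels = shape
    for _ in range(levels):
        length, channels = length // 2, channels * 2
    return length, channels

print(wavelet_shape((256, 1), 2))  # (64, 4): after wave1 and wave2
print(wavelet_shape((256, 1), 4))  # (16, 16): after wave4
```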
This is the decoder block:
```python
def DecoderMiniBlock(inputs, kernel_size, n_filters=32, inv_wavelet=True,
                     wave_kern="db2"):
    if inv_wavelet:
        wave0 = inputs
        wave1 = invdb3.call(wave0[:, :, :])
        wave2 = invdb3.call(wave1[:, :, :])
        # wave3 = invdb3.call(wave2[:, :, :])
        # wave4 = invdb3.call(wave3[:, :, :])
        # next_layer = WaveTFFactory.build(wave_kern, dim=1, inverse=True)(conv)
    conv = Conv1DTranspose(
        n_filters,
        kernel_size,
        activation="relu",
        padding="same",
        kernel_initializer="HeNormal")(wave2)
    return conv
```
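(If you want a concrete test that a forward/inverse wavelet pair really is invertible, independently of WaveTF, you can check perfect reconstruction on a toy signal. A sketch in plain TensorFlow with Haar filters, chosen only because their inverse is simple; db3 follows the same transposed-convolution pattern but needs extra boundary handling:)

```python
import tensorflow as tf

s = 2.0 ** -0.5
# Haar filters stacked as (filter_width, 1, 2): the same tensor serves as
# (width, in, out) for conv1d and (width, out, in) for its transpose
filt = tf.constant([[[s, s]], [[s, -s]]], dtype=tf.float32)

x = tf.random.stateless_normal((1, 16, 1), seed=(1, 2))

# forward step: stride-2 conv -> (approx, detail) channels at half length
y = tf.nn.conv1d(x, filt, stride=2, padding="VALID")  # shape (1, 8, 2)

# inverse step: for an orthonormal filter bank, the transposed convolution
# with the same filters reconstructs the input exactly
x_rec = tf.nn.conv1d_transpose(y, filt, output_shape=(1, 16, 1),
                               strides=2, padding="VALID")

err = float(tf.reduce_max(tf.abs(x - x_rec)))
print(err)  # essentially zero (float32 round-off)
```

A model that trains is not evidence that the inverse is correct: the surrounding `Conv1D`/`Conv1DTranspose` layers can learn around a lossy transform. A check like the one above isolates the wavelet pair itself.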
This is the autoencoder:
```python
class AnomalyDetector(Model):
    def __init__(self):
        super(AnomalyDetector, self).__init__()
        self.encoder = tf.keras.Sequential([
            # layers.InputLayer(input_shape=(256, 1)),
            EncoderMiniBlock(n_filters=32, kernel_size=2, dropout_prob=0, wave_kern="db2"),
            EncoderMiniBlock(n_filters=32 * 2, kernel_size=2, dropout_prob=0, wave_kern="db2"),
            EncoderMiniBlock(n_filters=32 * 4, kernel_size=2, dropout_prob=0, wave_kern="db2"),
            # layers.Conv1D(1, 2, activation="relu", padding="same", kernel_initializer="he_normal"),
            layers.Dense(1, activation="relu"),
        ])
        self.decoder = tf.keras.Sequential([
            DecoderMiniBlock(n_filters=32 * 4, kernel_size=2, wave_kern="db2"),
            DecoderMiniBlock(n_filters=32 * 2, kernel_size=2, wave_kern="db2"),
            DecoderMiniBlock(n_filters=32, kernel_size=2, wave_kern="db2"),
            layers.Dense(1, activation="relu"),
            # layers.Conv1D(1, 2, activation="relu", padding="same", kernel_initializer="he_normal"),
        ])

    def call(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return decoded

autoencoder = AnomalyDetector()
autoencoder.build((None, 256, 1))
optimizer = tf.keras.optimizers.Adam(lr=1e-3)
autoencoder.compile(loss="mean_absolute_error", optimizer=optimizer)
```
Hi @oykueravci,

- If you're just interested in using Daubechies wavelets, then you don't need to implement a new one, since it is already available in the `general_kernel` branch. The tricky part is if you want to extend to other, non-Daubechies wavelets (or if you want to use large kernels, because of numerical instability).
- The 2D wavelet (invertibly) transforms a tensor into 4 smaller tensors. How to augment networks with this transformation depends on your application. E.g., in the `mini_cnn.py` example you're looking at, the wavelet coefficients of the original image are simply concatenated to the computed features every time they halve their dimensions (which happens 4 times in the network). Conversely, in the `mini_unet.py` example a wavelet layer is used in place of `MaxPooling2D`, to reduce the feature dimensions without losing information (but also without selecting the "strongest" features).
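(To make the pooling comparison concrete: both operations halve the spatial length, but max pooling discards values while the wavelet keeps everything in extra channels. A minimal 1D sketch in plain TensorFlow, not WaveTF code:)

```python
import tensorflow as tf

x = tf.random.normal((1, 64, 8))  # (batch, length, features)

# max pooling: halves the length, keeps 8 channels, drops half the values
pooled = tf.keras.layers.MaxPooling1D(pool_size=2)(x)

# one Haar step per feature channel: also halves the length, but keeps
# all information by doubling the channels (approx + detail per feature)
s = 2.0 ** -0.5
even, odd = x[:, 0::2, :], x[:, 1::2, :]
wav = tf.concat([(even + odd) * s, (even - odd) * s], axis=-1)

print(pooled.shape, wav.shape)  # (1, 32, 8) (1, 32, 16)
```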
Actually, the way I wanted to go was to use a different wavelet serving the same role as MaxPooling2D, to reduce the feature dimensions, but I first tried with Daubechies level 3 to experiment. Anyway, thank you again for the response, and have a great day!