yscacaca/DeepSense

Hi, can you explain the kernel size in the following line?

shamanez opened this issue · 6 comments

The size of `acc_inputs` is (BATCH_SIZE, WIDE, INTER_DIM/2, 1), where `INTER_DIM/2 = SEPCTURAL_SAMPLES*3*2` (3 for the x, y, z axes of the sensor input, and 2 for the real and imaginary parts of the FFT result). Every 6 consecutive values along the third dimension of `acc_inputs` therefore correspond to a single frequency sample. That is why we use `kernel_size = [1, 2*3*CONV_LEN]` and `stride = [1, 2*3]` for the first convolutional layer.
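A minimal sketch of this interleaved layout, assuming TensorFlow 2.x and illustrative values for `BATCH_SIZE`, `WIDE`, `SEPCTURAL_SAMPLES`, and `CONV_LEN` (none of these numbers come from the repo):

```python
import tensorflow as tf

BATCH_SIZE, WIDE = 8, 10        # illustrative values, not from the repo
SEPCTURAL_SAMPLES = 10          # frequency samples per sensor axis
CONV_LEN = 3                    # kernel spans CONV_LEN frequency samples

# Interleaved layout: 6 consecutive values (3 axes x real/imag) per sample.
acc_inputs = tf.random.normal([BATCH_SIZE, WIDE, SEPCTURAL_SAMPLES * 3 * 2, 1])

# The kernel covers 2*3*CONV_LEN raw values; a stride of 2*3 jumps exactly
# one full frequency sample, so the kernel always stays sample-aligned.
conv1 = tf.keras.layers.Conv2D(
    filters=64,
    kernel_size=(1, 2 * 3 * CONV_LEN),
    strides=(1, 2 * 3),
    padding="valid",
)
print(conv1(acc_inputs).shape)  # (8, 10, 8, 64): SEPCTURAL_SAMPLES - CONV_LEN + 1 = 8
```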

Btw, you can also reshape `acc_inputs` into (BATCH_SIZE, WIDE, SEPCTURAL_SAMPLES, 6). Then the first convolutional layer can use `kernel_size = [1, CONV_LEN]` and `stride = [1, 1]`.
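A sketch of this alternative, under the same illustrative constants as above: the reshape folds the 6 interleaved values into the channel dimension, so a plain `[1, CONV_LEN]` kernel with stride 1 covers the same frequency samples.

```python
import tensorflow as tf

BATCH_SIZE, WIDE, SEPCTURAL_SAMPLES, CONV_LEN = 8, 10, 10, 3  # illustrative

acc_inputs = tf.random.normal([BATCH_SIZE, WIDE, SEPCTURAL_SAMPLES * 3 * 2, 1])
# Fold each group of 6 consecutive values (3 axes x real/imag) into channels.
acc_reshaped = tf.reshape(acc_inputs, [BATCH_SIZE, WIDE, SEPCTURAL_SAMPLES, 6])

conv1 = tf.keras.layers.Conv2D(filters=64, kernel_size=(1, CONV_LEN),
                               strides=(1, 1), padding="valid")
print(conv1(acc_reshaped).shape)  # (8, 10, 8, 64), same spatial output as before
```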

Thanks. Can we represent the FFT results as magnitude and phase angle (polar form) instead of as complex numbers?

Yes, it should be okay.
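For reference, the polar representation the question asks about, shown with NumPy (a generic illustration, not code from the repo): `np.abs` and `np.angle` give magnitude and phase in place of the real and imaginary parts, and the two forms carry the same information.

```python
import numpy as np

signal = np.random.randn(128)       # e.g. one window of accelerometer data
spectrum = np.fft.fft(signal)

real, imag = spectrum.real, spectrum.imag                 # rectangular form
magnitude, phase = np.abs(spectrum), np.angle(spectrum)   # polar form

# Equivalent information: magnitude * exp(i*phase) == real + i*imag.
assert np.allclose(magnitude * np.exp(1j * phase), spectrum)
```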

Another thing: here we use the same convolutional neural net at each time step. How do we train the parameters of the convolutional neural net? Do we take the derivatives at each time step and jointly update them?

Yes, it does a similar thing to an RNN: the gradient is calculated with backpropagation through time, so the shared weights receive the summed gradient contributions from all time steps.
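A minimal sketch of that weight sharing, assuming TensorFlow 2.x with made-up shapes: one `Conv2D` layer (one set of variables) is applied at every time step, so backprop accumulates its gradient over all steps, as in backpropagation through time.

```python
import tensorflow as tf

conv = tf.keras.layers.Conv2D(4, kernel_size=(1, 3))         # one set of weights
steps = [tf.random.normal([2, 1, 16, 1]) for _ in range(5)]  # 5 time steps

with tf.GradientTape() as tape:
    # The same `conv` (same variables) processes every time step.
    loss = tf.add_n([tf.reduce_sum(conv(x)) for x in steps])

# One gradient per shared variable, summed across all time steps.
grads = tape.gradient(loss, conv.trainable_variables)
print([g.shape for g in grads])
```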

Perfect. :)