melmikaty/3D_CNN

The input data shape

Closed this issue · 7 comments

Hi Mohamed,
Can you tell me the format of the datasets? I am also unsure about the shape of the data.
In the code you reshape the image to [h, w, d], but the filter of tf.nn.conv3d() is [d, h, w]:
img3_obj.img3_uint8 = tf.reshape(tf.slice(record_bytes, [label_bytes],[img3_bytes]),[img3_obj.height,img3_obj.width,img3_obj.depth, img3_obj.nChan])

tf.nn.conv3d(input, filter, strides, padding, name=None), where the filter has shape [filter_depth, filter_height, filter_width, in_channels, out_channels].

I want to know whether there is any conflict here, because my model gives bad results and I can't find the reason.
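To make the shape question concrete, here is a small NumPy sketch (dimensions are hypothetical, not taken from the repo) showing that a volume stored as [height, width, depth, channels] would need a transpose before being fed to tf.nn.conv3d, whose input layout is [batch, in_depth, in_height, in_width, in_channels]:

```python
import numpy as np

# Hypothetical dimensions for illustration only.
h, w, d, c = 4, 5, 6, 1

# Volume reshaped as in the repo's reader: [height, width, depth, channels].
vol_hwdc = np.arange(h * w * d * c).reshape(h, w, d, c)

# tf.nn.conv3d expects [batch, in_depth, in_height, in_width, in_channels],
# so the depth axis would have to move to the front before batching:
vol_dhwc = np.transpose(vol_hwdc, (2, 0, 1, 3))

print(vol_dhwc.shape)  # (6, 4, 5, 1)
```

Whether this transpose matters depends on whether the stored axis order actually carries meaning for the data, which is what the question above is about.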

tr4B commented

Hi kexianer, I'm trying to build a classifier for 3D shapes using TensorFlow.
How can I do it?
What kind of 3D dataset did you use with this code?

Hi Kexianer,
The data I was working on was saved as binary files, where the first byte of each row is the label and the following bytes contain the 3D image data (height, width, depth, channels).
As for the order of height, width and depth: I was working on 3D volumes in 3D space, where the order was not important. You can modify the code to suit your data!
Cheers,
Mohamed
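The record layout Mohamed describes (one label byte followed by the raw voxel bytes) can be sketched in NumPy as follows; the sizes and label value here are made up for illustration:

```python
import numpy as np

# Hypothetical sizes for illustration only.
h, w, d, c = 2, 3, 4, 1
label_bytes = 1
img3_bytes = h * w * d * c

# Build one fake record: a label byte followed by the raw voxel bytes.
label = 7
voxels = np.arange(img3_bytes, dtype=np.uint8)
record = np.concatenate([np.array([label], dtype=np.uint8), voxels])

# Decode it the way the repo's reader does: split the label from the image
# bytes, then reshape to [height, width, depth, channels].
decoded_label = int(record[0])
decoded_vol = record[label_bytes:label_bytes + img3_bytes].reshape(h, w, d, c)

print(decoded_label, decoded_vol.shape)  # 7 (2, 3, 4, 1)
```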

Hi @tr4B,
I used CT data, which I processed into binary files.
This code is easy to use; thanks to melmikaty for sharing it, by the way.
Good luck!

@melmikaty Hi,
Yes, the shape order doesn't have an important influence.
I only need two classes. When I use the model described in the code (conv1 + conv2 + fcon3 + fcon4), the results are strange. First, when I test our validation set with different batch sizes, I get different results: with batch size 128 or 64, accuracy = 94%; with batch size 1, 2 or 8, accuracy = 55%-60%. I then tried the same 128 examples (one example repeated 128 times) with batch sizes 1 and 128: all wrong. In another experiment with batch size 128: all positive examples, acc = 71%; all negative examples, acc = 71%; positive:negative = 1:1, acc = 91%.
So I think the data distribution within a batch has a large effect on the results. I wonder if you have encountered this problem, and whether you have any ideas or suggestions for me.
I also found that the two norm layers are in different places (one after conv1 and one after pool2); could this be causing my problem?
Thank you in advance!
KeXianer

Hi @kehaozhe ,

Generally speaking, batches should be balanced, i.e. they should contain an equal number of positive and negative samples (in a binary classification problem, or an equal number of instances of each class in a multi-class classification problem), unless for some reason you would like to give one class a higher weight than the other. As for the batch size: the smaller the batch size, the less memory is required, but the less accurate the estimated gradient. I don't reckon the normalisation layer causes this problem, as it just scales and shifts the distributions of the activations.
Cheers,
Mohamed
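Mohamed's point about balanced batches can be sketched in NumPy; the index arrays and batch size below are hypothetical, not part of the repo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dataset: indices 0-99 are positive, 100-199 are negative.
pos_idx = np.arange(0, 100)
neg_idx = np.arange(100, 200)

def balanced_batch(pos_idx, neg_idx, batch_size, rng):
    """Draw half the batch from each class so every batch is balanced."""
    half = batch_size // 2
    batch = np.concatenate([
        rng.choice(pos_idx, half, replace=False),
        rng.choice(neg_idx, half, replace=False),
    ])
    rng.shuffle(batch)
    return batch

batch = balanced_batch(pos_idx, neg_idx, 32, rng)
print(len(batch))  # 32
```

This is only one way to balance batches; oversampling the minority class or weighting the loss per class are common alternatives when the classes are very unequal in size.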

Hi @melmikaty ,
After removing the l2 norm layer, validation and testing are back to normal: accuracy > 95% on both.
In my opinion, with tf.nn.l2_normalize(..., dim=0, ...), dim=0 means it normalises across the batch dimension, so the data distribution within a batch has a large effect on the results. When the test data distribution is similar to training, I get a better result; but when I run prediction and send a single image into the model, it is not similar to training, so I get a bad result.
Have you tried predicting on new data?
Best Wishes!
Ke
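Ke's diagnosis can be demonstrated with a NumPy equivalent of normalising over the batch dimension (the activation values below are made up for illustration): each sample's normalised output depends on the other samples in the batch, so the same sample gives a different result when it arrives alone.

```python
import numpy as np

def l2_normalize(x, axis):
    # NumPy equivalent of tf.nn.l2_normalize along one axis.
    norm = np.sqrt(np.sum(np.square(x), axis=axis, keepdims=True))
    return x / np.maximum(norm, 1e-12)

# Hypothetical activations: 4 samples in the batch, 3 features each.
batch = np.array([[3.0, 1.0, 2.0],
                  [4.0, 2.0, 2.0],
                  [0.0, 2.0, 1.0],
                  [0.0, 0.0, 0.0]])

# Normalising over dim=0 couples every sample to the rest of the batch...
full = l2_normalize(batch, axis=0)

# ...so the first sample normalised alone comes out differently
# (with batch size 1 every positive value collapses to 1.0):
single = l2_normalize(batch[:1], axis=0)

print(np.allclose(full[0], single[0]))  # False
```

Normalising over the feature dimension instead (dim=-1, i.e. per sample) would make the output independent of the batch composition.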

@kehaozhe
It's recommended to use the same parameters for the training and prediction networks. However, if you want to predict one image at a time (batch size of one), you can train the network with a relatively large batch size and then use the trained weights to initialise a new network whose batch size is one. I hope this helps...
Cheers,
Mohamed