Regarding using this framework for my semantic segmentation work
surfreta opened this issue · 9 comments
Hi,
I have several questions regarding using this library:
- If the data set I'm studying is from a totally different domain than the typical benchmarks, such as PASCAL VOC, what would be the right pipeline for using your framework? Can I still use the pre-trained model (weights) and re-train the model on my dataset?
- The problem I am studying has a limited number of images, and each image is large, i.e., 4096 x 4096 pixels. The masked area covers about 5%~10% of each image. I have been thinking of generating a large number of training samples from these big images, where each training image is 128 x 128. In other words, building a model based on 128 x 128 crops. During the testing stage, I would run prediction on each 128 x 128 sub-frame of the test image and stitch the predicted masks together. Is this the right approach?

Besides, are there any suggestions on generating such a training set?
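In case it helps to make the tile-and-stitch idea concrete, here is a minimal sketch (the `predict_fn` callback and the toy "model" are my own illustration, not part of this repo; it also assumes the image dimensions are exact multiples of the tile size, otherwise you would pad first):

```python
import numpy as np

def predict_tiled(image, predict_fn, tile=128):
    """Run predict_fn on non-overlapping tile x tile crops and
    stitch the per-tile label masks back into a full-size mask.
    Assumes image height/width are multiples of `tile`."""
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=np.int64)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            crop = image[y:y + tile, x:x + tile]
            mask[y:y + tile, x:x + tile] = predict_fn(crop)
    return mask

# toy stand-in for a real network: label a whole tile by its mean intensity
dummy = lambda crop: np.full(crop.shape[:2], int(crop.mean() > 0), dtype=np.int64)

img = np.zeros((256, 256), dtype=np.float32)
img[128:, 128:] = 1.0          # only the bottom-right tile is "foreground"
out = predict_tiled(img, dummy)
```

One thing to watch out for: predictions near tile borders tend to be worse because the network lacks context there, so people often tile with overlap and keep only the center of each prediction.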
I'm also very interested in any advice on preparing a new dataset for training.
I will upload an example of usage for a different dataset that I worked on recently.
A small number of images is usually a problem.
Reusing pretrained weights won't make it worse, I think.
At least, all of the works that I have seen before use pretrained weights.
Let me know if it helps.
Don't know if I understood it right, but will you upload an example of how to re-train the model with a new dataset?
If I'm right, that would be awesome!
@MrChristo59 , yeah, that is what I meant :)
Looking forward to it.
Just a little question to be sure I'm right.
To create a dataset for segmentation training, you need an image and another one with the mask of what you want to learn. I guess the color of the mask defines the class it refers to.
Am I right? If yes, is there any advice on the proper way to make this mask (border size, colors, ...)?
Thanks
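On the color question: training scripts usually expect a per-pixel class-index map rather than raw colors, so a color-coded mask has to be converted first. A small sketch of that conversion (the palette below is made up for illustration; PASCAL VOC, for instance, has its own fixed palette):

```python
import numpy as np

# hypothetical palette: RGB color -> class index
PALETTE = {
    (0, 0, 0): 0,      # background
    (255, 0, 0): 1,    # class "A"
    (0, 255, 0): 2,    # class "B"
}

def color_mask_to_labels(mask_rgb):
    """Map each pixel's RGB color to its class index."""
    labels = np.zeros(mask_rgb.shape[:2], dtype=np.int64)
    for color, idx in PALETTE.items():
        matches = np.all(mask_rgb == np.array(color, dtype=np.uint8), axis=-1)
        labels[matches] = idx
    return labels

# tiny demo: 2x2 mask with one red and one green pixel
mask = np.zeros((2, 2, 3), dtype=np.uint8)
mask[0, 0] = (255, 0, 0)
mask[1, 1] = (0, 255, 0)
labels = color_mask_to_labels(mask)
```

PASCAL-style annotations also reserve a special "void"/ignore color for object borders, which is excluded from the loss; that's the usual answer to the border-size question.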
Hey Dannill,
Did you release the example yet? Don't know if it's on your blog or on the git.
Hey @warmspringwinds, did you upload any example of training a new dataset with your scripts? I am trying to train on a new dataset with fewer images, around 250, but I am facing the OutOfBound error listed in the issues. Could you help resolve this problem?
Thank you @warmspringwinds for this suggestion. I want to use the FCN32s model for segmentation, initialized from VGG16. After going through some of your files, what I understood is that the pascal_voc.py script in the dataset folder makes use of the PASCAL 2012 and Berkeley PASCAL datasets, which you mentioned in this repository as well. I can substitute the root path with my dataset and it works similarly, generating tfrecords via the get_annotation_pairs methods in utils/pascal_voc.py. What I could not understand is: where is the explicit example of using a different dataset? I'm sorry, I am just new to deep learning with CNNs.
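In case it helps anyone adapting the pipeline: before the tfrecord step, all that is really needed is a list of (image, annotation) filename pairs analogous to what the PASCAL helper returns. A hedged sketch for a custom dataset (the `images/` + `annotations/` directory layout and the helper name here are my own assumptions, not this repo's API):

```python
import os
import tempfile

def get_annotation_pairs(root):
    """Pair each image in <root>/images with the same-named
    .png mask in <root>/annotations (assumed custom layout)."""
    img_dir = os.path.join(root, 'images')
    ann_dir = os.path.join(root, 'annotations')
    pairs = []
    for name in sorted(os.listdir(img_dir)):
        stem = os.path.splitext(name)[0]
        ann_path = os.path.join(ann_dir, stem + '.png')
        if os.path.exists(ann_path):
            pairs.append((os.path.join(img_dir, name), ann_path))
    return pairs

# tiny demo on a throwaway directory with two image/mask pairs
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'images'))
os.makedirs(os.path.join(root, 'annotations'))
for stem in ('img1', 'img2'):
    open(os.path.join(root, 'images', stem + '.jpg'), 'w').close()
    open(os.path.join(root, 'annotations', stem + '.png'), 'w').close()
pairs = get_annotation_pairs(root)
```

The resulting list of pairs can then be fed to whatever tfrecord-writing utility the training script uses in place of the PASCAL-specific one.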