rishizek/tensorflow-deeplab-v3

Running inference in C++

samhodge opened this issue · 8 comments

Hi

I have experimented with your model and I am currently training it on my local machine, which will be a time-consuming affair.

Before I started that, I ran inference on an image sequence from some holiday footage with a group of people in it, and I was really impressed with the segmentation quality I got out of your trained model weights.

Now I want to run that model in a C++ application.

I have already got this working with the Google model, as they provide a frozen_graph.pb file, which makes everything very straightforward: you choose the inputs and outputs, ask the session to run, handle the int64 index-to-RGB conversion in your C++ code, and you can produce segmentation maps with ease.
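For reference, the core of my command-line driver with the Google frozen graph is roughly the following (a sketch only; the `ImageTensor:0` and `SemanticPredictions:0` tensor names are the ones I use with the official export, and a graph frozen from your repo would expose different ones):

```cpp
#include <memory>
#include <string>
#include <vector>

#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

// Load a frozen graph and run one (already resized) image through it,
// returning the per-pixel class indices.
tensorflow::Tensor RunSegmentation(const std::string& graph_path,
                                   const tensorflow::Tensor& image) {
  tensorflow::GraphDef graph_def;
  TF_CHECK_OK(tensorflow::ReadBinaryProto(tensorflow::Env::Default(),
                                          graph_path, &graph_def));

  std::unique_ptr<tensorflow::Session> session(
      tensorflow::NewSession(tensorflow::SessionOptions()));
  TF_CHECK_OK(session->Create(graph_def));

  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(session->Run({{"ImageTensor:0", image}},
                           {"SemanticPredictions:0"}, {}, &outputs));

  // For the Google export this is an int64 tensor of shape
  // [1, height, width] holding one class id per pixel.
  return outputs[0];
}
```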

But the model you have provided gives better quality.

So, looking at your model in TensorBoard while training and comparing it to the Google one, I can see the following input:

IteratorGetNext
Operation: IteratorGetNext
Attributes (2)
output_shapes
{"list":{"shape":[{"dim":[{"size":-1},{"size":513},{"size":513},{"size":3}]},{"dim":[{"size":-1},{"size":513},{"size":513},{"size":1}]}]}}
output_types
{"list":{"type":["DT_FLOAT","DT_INT32"]}}
Device
/device:CPU:0

and

the output (though I might be wrong here):

softmax_tensor
Operation: Reshape
Attributes (2)
T
{"type":"DT_FLOAT"}
Tshape
{"type":"DT_INT32"}

But where I get a bit lost is how you get from softmax_tensor to the RGB semantic segmentation, because the trickery done in Python needs a C++ equivalent.
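My guess is that the C++ equivalent boils down to an argmax over the class axis followed by a palette lookup, something like the sketch below (assuming the probabilities come back as a float [1, height, width, num_classes] tensor and the standard 21-class PASCAL VOC colour map, neither of which I have confirmed against your graph):

```cpp
#include <cstdint>
#include <vector>

#include "tensorflow/core/framework/tensor.h"

// Argmax over the class axis, then map each class id to a colour.
// This mirrors the colourising done on the Python side.
std::vector<std::uint8_t> ProbsToRGB(const tensorflow::Tensor& probs) {
  // Standard PASCAL VOC colour map: background, aeroplane, bicycle, ...
  static const std::uint8_t kPalette[21][3] = {
      {0, 0, 0},     {128, 0, 0},   {0, 128, 0},    {128, 128, 0},
      {0, 0, 128},   {128, 0, 128}, {0, 128, 128},  {128, 128, 128},
      {64, 0, 0},    {192, 0, 0},   {64, 128, 0},   {192, 128, 0},
      {64, 0, 128},  {192, 0, 128}, {64, 128, 128}, {192, 128, 128},
      {0, 64, 0},    {128, 64, 0},  {0, 192, 0},    {128, 192, 0},
      {0, 64, 128}};

  const auto p = probs.tensor<float, 4>();  // [1, height, width, num_classes]
  const int height = static_cast<int>(probs.dim_size(1));
  const int width = static_cast<int>(probs.dim_size(2));
  const int num_classes = static_cast<int>(probs.dim_size(3));

  std::vector<std::uint8_t> rgb(static_cast<size_t>(height) * width * 3);
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      // Pick the most probable class for this pixel.
      int best = 0;
      for (int c = 1; c < num_classes; ++c) {
        if (p(0, y, x, c) > p(0, y, x, best)) best = c;
      }
      const std::uint8_t* colour = kPalette[best % 21];
      const size_t i = (static_cast<size_t>(y) * width + x) * 3;
      rgb[i + 0] = colour[0];
      rgb[i + 1] = colour[1];
      rgb[i + 2] = colour[2];
    }
  }
  return rgb;
}
```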

Also, I am not seeing the images in TensorBoard while training, like in the README.md.

Currently my model says:

INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 1 into model/model.ckpt.
INFO:tensorflow:cross_entropy = 3.6564944, train_px_accuracy = 0.0064560226, learning_rate = 0.007, train_mean_iou = 0.0025944803
INFO:tensorflow:loss = 25.835169, step = 1
INFO:tensorflow:cross_entropy = 1.655007, train_px_accuracy = 0.35825747, learning_rate = 0.0069979005, train_mean_iou = 0.034966677 (889.482 sec)
INFO:tensorflow:cross_entropy = 1.2607722, train_px_accuracy = 0.4497315, learning_rate = 0.0069958004, train_mean_iou = 0.07742665 (905.908 sec)
INFO:tensorflow:cross_entropy = 1.6334174, train_px_accuracy = 0.48777512, learning_rate = 0.0069937008, train_mean_iou = 0.09395798 (911.375 sec)
INFO:tensorflow:cross_entropy = 1.1489747, train_px_accuracy = 0.5459064, learning_rate = 0.006991601, train_mean_iou = 0.107514605 (934.927 sec)

So I am going to be here for some time waiting for the model to converge.

Any hints on how I can get a model that more closely resembles the one found here:

https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md
in structure, not so much in its weight values?

I have that working perfectly in a command-line C++ driver, which I can put into my parent application.

But your model produces better results, albeit from Python.

I think I can handle freezing the graph to a .pb file on my own; I just need to know where to cut the cord.
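As a side note, to work out where to cut I just dump the node names from the GraphDef and pick the input/output tensors from that list, roughly like this:

```cpp
#include <iostream>

#include "tensorflow/core/framework/graph.pb.h"

// List every node in a (frozen or unfrozen) GraphDef so the input/output
// tensor names can be chosen for freezing and for Session::Run.
void PrintNodeNames(const tensorflow::GraphDef& graph_def) {
  for (const tensorflow::NodeDef& node : graph_def.node()) {
    std::cout << node.name() << " (" << node.op() << ")\n";
  }
}
```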

sam

Sorry, disregard the comment about TensorBoard: I was running TensorFlow 1.7. When I downgraded to 1.6 and got the GPU working, things went a lot more smoothly, but my GPU can only fit a batch size of one. I might be able to get access to a 16 GB GPU later; currently I only have 4 GB available.

Hi @samhodge , thank you for your interest in the repo.

Some parts of your questions are a bit unclear to me, but I will try my best (sorry for my English comprehension skills; I'm not a native English speaker).
I started implementing the model before Google open-sourced their DeepLab code, so there should be many differences. As you probably know, their repo implements DeepLabv3+ while this repo implements DeepLabv3. Also, their pre-trained model is better in terms of training data, because theirs is pre-trained on the COCO dataset as well as ImageNet, while mine is pre-trained on ImageNet only.

how you get from the softmax_tensor to the RGB semantic segmentation

This line converts the [HxWx1] numpy array to an [HxWx3] RGB image using the decode_labels() function.

I hope I answered some of your questions.

Thanks for your response, I will see what I can do with the model you have provided and the information provided.

Thanks again

Sam

@samhodge Hi buddy, I want to run the Google DeepLab V3+ model in a C++ application using frozen_graph.pb, but I get many errors, and I think they are caused by the image-processing stage. Now I have found that you have got it working. Would you mind sharing the code?

Hi @samhodge @shuiqingliu, I am also trying to use DeepLab in a C++ application. Would either of you mind sharing the code you used to get it running?
Where did you get the frozen_graph.pb file from?

Thank you very much!

Freezing the graph is the same as freezing any other graph in TensorFlow, and the same goes for running it in C++. If you have specific questions, I am happy to help you out.
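The usual sticking point at the image-processing stage is building the input tensor. With the Google export I do something along these lines (a sketch; note that a graph frozen from this repo expects DT_FLOAT input, as the IteratorGetNext attributes above show, so the pixels would need to be converted to float and preprocessed rather than copied as raw bytes):

```cpp
#include <cstdint>
#include <cstring>

#include "tensorflow/core/framework/tensor.h"

// Wrap a packed 8-bit RGB image (row-major, no padding) in the uint8
// [1, height, width, 3] tensor that the Google frozen graph's
// ImageTensor:0 input expects.
tensorflow::Tensor MakeImageTensor(const std::uint8_t* rgb,
                                   int height, int width) {
  tensorflow::Tensor image(tensorflow::DT_UINT8,
                           tensorflow::TensorShape({1, height, width, 3}));
  std::memcpy(image.flat<tensorflow::uint8>().data(), rgb,
              static_cast<size_t>(height) * width * 3);
  return image;
}
```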


You can get the frozen_graph file through this tool.

OK, thank you! Are you able to share the code you use to run inference with DeepLab in C++? Thanks again.