xadrianzetx/coral-deeplab

Inference Script

Closed this issue · 11 comments

Thank you for providing this repo. I think I was able to compile a model for the Coral successfully, and I tried to use the segmentation inference code provided by Google: https://github.com/google-coral/pycoral/blob/master/examples/semantic_segmentation.py
However, the output from that inference did not make much sense, especially compared to the output from one of the models provided by Google.

Model from Google:
[image: segmentation_result]

Compiled model:
[image: custom_segmentation_result]

Is there a reason this inference script will not work with models from this repo? Would you happen to have an inference script that would work better?

Hi @bsteiner-dandy

Looking at the script you provided, there are a few things that differ:

  • The output tensor size in the Google model is 513x513, while ours is 33x33. The reason is that we do not include the final upsampling layer in the model, as this op was not executing on the TPU anyway. Try resizing the output tensor back to the image shape after inference.
  • The script is missing some preprocessing steps. Make sure pixel values are scaled to the range 0-1 (e.g. by multiplying the input tensor by 1/255).
  • Make sure to apply the zero point and scale transformations. In our case this means input_tensor / interpreter.get_input_details()[0]["quantization"][0]. See the sketch after this list.
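
Putting all three points together, a minimal sketch of the full flow could look like the following (the tflite_runtime import, file names, and the argmax-over-logits postprocessing are assumptions on my part; if your model already emits class indices, skip the argmax):

    import numpy as np
    from PIL import Image
    import tflite_runtime.interpreter as tflite

    # Load the Edge TPU compiled model.
    interpreter = tflite.Interpreter(
        "segmentation_edgetpu.tflite",
        experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
    )
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()[0]
    output_details = interpreter.get_output_details()[0]

    # Preprocess: resize to the model input size, scale pixels to 0-1,
    # then apply the input quantization (zero point and scale).
    image = Image.open("image.jpg").convert("RGB")
    resized = image.resize((513, 513))
    input_tensor = np.asarray(resized, dtype=np.float32) / 255.0
    scale, zero_point = input_details["quantization"]
    input_tensor = input_tensor / scale + zero_point
    input_tensor = np.expand_dims(input_tensor, 0).astype(input_details["dtype"])

    interpreter.set_tensor(input_details["index"], input_tensor)
    interpreter.invoke()

    # Postprocess: take the highest scoring class per pixel, then
    # resize the 33x33 mask back to the original image size.
    logits = interpreter.get_tensor(output_details["index"])[0]
    mask = np.argmax(logits, axis=-1).astype(np.uint8)
    mask = np.array(Image.fromarray(mask).resize(image.size, Image.NEAREST))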

I'll try to publish a proper example script later if there's time.

Thank you for the response @xadrianzetx. I'm having trouble applying all of your suggestions; if you are able to publish an example script, that would be greatly appreciated.

No worries, I'll prepare one over the weekend.

Hi @bsteiner-dandy

I've added an example script in #19, based on the one you posted. Looks correct now.
[image: doggos]

Thanks @xadrianzetx! Unfortunately I am not able to install some of the dependencies this requires on the Coral Dev Board, and I am trying to work through those issues now. For example: Could not find a version that satisfies the requirement tensorflow==2.4.0

See #12 for a lightweight install of coral-deeplab.

Great, that works! Hopefully one last question: I am trying to run a model that I compiled myself, to verify the full pipeline from training to inference. I am compiling the model using functions from your test code:

    model = cdl.applications.CoralDeepLabV3Plus(weights='pascal_voc')
    datagen = fake_dataset_generator((513, 513, 3), 10)
    stdout = quantize_and_compile(model, datagen)
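
where fake_dataset_generator is roughly the following (paraphrased from memory, so it may not match the actual test helper exactly):

    import numpy as np

    def fake_dataset_generator(shape, num_samples):
        # Yields batches of random data shaped like the model input,
        # acting as a representative dataset for post-training quantization.
        def generator():
            for _ in range(num_samples):
                sample = np.random.rand(1, *shape).astype(np.float32)
                yield [sample]
        return generator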

I tried to run this compiled model like this:

    model = "/home/mendel//segmentation_edgetpu.tflite"
    interpreter = tflite.Interpreter(
        model, experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")]
    )

However, the output I get is not what I would expect from Pascal VOC weights. Is there a different way I should be loading a compiled model from disk?

Looks like you're doing it correctly. Maybe try the regular CoralDeepLabV3 version first?
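
That is, swap the application class in your compile snippet, assuming it takes the same arguments as the Plus variant:

    model = cdl.applications.CoralDeepLabV3(weights='pascal_voc')
    datagen = fake_dataset_generator((513, 513, 3), 10)
    stdout = quantize_and_compile(model, datagen)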

That did the trick! Thank you!

Argh, I made a mistake: it was not actually using the CoralDeepLabV3 model but the cdl.pretrained.EdgeTPUModel.DEEPLAB_V3_DM1 model instead. Do you have any other ideas I could try? Or would you be able to provide a script showing how that model was trained and compiled?

So if I understand correctly, you compiled the regular V3 with pascal_voc weights and the resulting .tflite is not comparable to the precompiled model from this repo? You're correct, that does not seem right. Could you open a separate issue and provide the script you're using to compile the model? I'll investigate.