google-research/deeplab2

Guide for standalone inference of Deeplab2?

dheera opened this issue · 2 comments

Hi,
Is there any chance a guide could be included for:
(a) The fastest path to clone the repo and run inference on a regular JPEG image with a pretrained model, without converting to TFRecords? I was flip-flopping between the Installation, Getting Started, Cityscapes, and Panoptic documentation pages and couldn't actually get started. I kept getting stuck in multiple places: which CUDA version, which cuDNN version, custom ops that failed to compile (so I skipped them), environment variables, and setting the VAL_SET directory. Now it says tensorflow.python.framework.errors_impl.NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ${INIT_CHECKPOINT} -- I have a checkpoint downloaded and am not sure why it isn't found (or where it is looking), but I'll probably have to dig through more code to fix this.
(b) Ideally, a way to do the above without modifying environment variables or configurations? Or, if "Installation" is required, how to install it to /usr/local/lib/. It would then be super nice if one could do:

import deeplab2
vip = deeplab2.VIPDeeplab(path_to_ckpt_or_pb_file)
depth, semantic = vip(path_to_jpeg_file_or_numpy_image)
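Regarding the NotFoundError in (a): one likely cause, assuming the config or flag value was copied verbatim from the docs, is that the literal string ${INIT_CHECKPOINT} was never expanded by the shell, so TensorFlow looks for a path that literally contains "${INIT_CHECKPOINT}". A quick stdlib sketch to check and expand such placeholders (the directory path below is hypothetical):

```python
import os

# Hypothetical value copied from the docs; "${INIT_CHECKPOINT}" is a
# shell-style placeholder, not a path TensorFlow can open as-is.
config_value = "${INIT_CHECKPOINT}/ckpt-60000"

# Point the variable at wherever the pretrained checkpoint was extracted.
os.environ["INIT_CHECKPOINT"] = "/tmp/deeplab2_checkpoints"

# os.path.expandvars substitutes ${VAR} occurrences from the environment.
expanded = os.path.expandvars(config_value)
print(expanded)  # /tmp/deeplab2_checkpoints/ckpt-60000
```

If the error message still shows the raw ${INIT_CHECKPOINT} string, the variable was not set (or not exported) in the shell that launched the script.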

It was relatively easy to create something like this with the DeepLab v3+ repo, but it seems much harder with the DeepLab2 repo.

Here's an example of how I deployed Deeplabv3+ before:
https://github.com/dheera/ros-semantic-segmentation/tree/master/semantic_segmentation/nodes/models/mnv2_coco2017_driving_513
It is only three files: a frozen model, a Python file, and a JSON configuration. Nothing else, no dependencies other than tensorflow>=1.11, no build scripts, no installation, no environment variables, and it can be run with something as simple as:

import mnv2_coco2017_driving_513
model = mnv2_coco2017_driving_513.Model()
semantic_output = model.infer([some_opencv_or_numpy_image])[0]

It would be nice if there were a guide to run Panoptic-DeepLab, ViP-DeepLab, etc. with similarly simple code, i.e. download a .pb file and a minimal single Python file that runs inference given a path to a .jpg or a video device as its only argument. The latest results, e.g. ViP-DeepLab, look super promising, but better documentation in this regard would be nice.
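For reference, the three-line API above could be sketched as a single-file wrapper around a SavedModel exported with deeplab2's export_model.py. Everything here is an assumption: DeepLab2 does not ship this class, the output keys depend on the experiment config, and the input signature is a guess based on a typical exported model.

```python
"""Hypothetical single-file wrapper sketching the requested API."""
import os


class VIPDeeplab:
    def __init__(self, saved_model_dir):
        # Expand ${VAR}-style placeholders so docs-copied paths resolve.
        saved_model_dir = os.path.expandvars(saved_model_dir)
        if not os.path.isdir(saved_model_dir):
            raise FileNotFoundError(
                f"No SavedModel directory at {saved_model_dir!r}")
        # Lazy import keeps this file importable without TensorFlow installed.
        import tensorflow as tf
        self._model = tf.saved_model.load(saved_model_dir)

    def __call__(self, image):
        # `image` is assumed to be an HxWx3 uint8 numpy array; the exported
        # signature is assumed to take a single batched image tensor.
        import tensorflow as tf
        outputs = self._model(tf.convert_to_tensor(image)[tf.newaxis])
        # Output key names vary with the config; return everything.
        return {name: tensor.numpy() for name, tensor in outputs.items()}
```

Usage would then be close to the snippet above: `vip = VIPDeeplab("/path/to/exported_model")` followed by `outputs = vip(image)`.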

Hi,
There are actually notebooks for both: Deeplab_demo and DeepLab_VIP_demo.ipynb.
Check them out; they helped me a lot.

Hi @dheera,

Thanks for opening the issue.
Unfortunately, we currently do not have the bandwidth to do that.
We would be happy to add a link to your repo if you manage to make it work.

We will be closing this issue, since it has nothing to do with the codebase/model itself.
In any case, please feel free to open a new one, if needed.

Cheers,