Does the demo generate the annotation files for the test images?
MoWizsubhi84 opened this issue · 6 comments
Hi, sorry to bother you with this question. I know that I have to downgrade my TensorFlow to 1.3 and CUDA to 8.0; however, I'm just wondering whether the demo generates the annotation files.
I'm a PhD researcher working on food segmentation, and your tool would be a great help and a big step forward.
I wonder whether the tool will be released soon, or should I downgrade and start with the demo?
Thanks in advance for your help.
Hi,
The demo does not currently generate an annotation file, but if you follow the associated IPython notebook, you will see that the predicted polygons are available as numpy arrays, which you could use to hack together annotations for your needs. Another hack you would need is to draw a bounding box on an image and feed that crop into the network (instead of using the precomputed crops the demo currently relies on). We also plan to release a graph that incorporates our correction mode (as described in our paper and in PolygonRNN), to enable interaction with our released model instead of just the one-shot prediction we have released right now.
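To make that concrete, here is a rough sketch of what both hacks could look like. The function names, the JSON layout, the assumption that the notebook's polygons come out as an (N, 2) numpy array in normalized crop coordinates, and the 224x224 input size are all my assumptions, so please check them against the notebook rather than treating this as our API:

```python
# Rough sketch only (not part of this repo): exporting a predicted polygon
# to a simple JSON annotation, and cropping a user-drawn bounding box so the
# crop can be fed to the network in place of the demo's precomputed crops.
# Assumes Pillow and numpy; the names, JSON layout, normalized crop
# coordinates, and 224x224 input size are all assumptions.
import json

import numpy as np
from PIL import Image

def polygon_to_annotation(poly, bbox, label, out_path):
    """poly: (N, 2) vertices in crop coordinates, assumed normalized to [0, 1]
    (adjust if the notebook returns pixel coordinates).
    bbox: (x0, y0, w, h) of the crop in the original image."""
    x0, y0, w, h = bbox
    abs_poly = np.array(poly, dtype=float)  # copy so we don't mutate the input
    abs_poly[:, 0] = x0 + abs_poly[:, 0] * w  # map back to full-image x
    abs_poly[:, 1] = y0 + abs_poly[:, 1] * h  # map back to full-image y
    with open(out_path, "w") as f:
        json.dump({"label": label, "polygon": abs_poly.tolist()}, f, indent=2)

def crop_for_network(image_path, bbox, input_size=224):
    """Crop the bbox region and resize it to the network input size
    (224 is an assumption; check the model config)."""
    x0, y0, w, h = bbox
    img = Image.open(image_path).convert("RGB")
    crop = img.crop((x0, y0, x0 + w, y0 + h)).resize((input_size, input_size))
    return np.asarray(crop)  # feed this in place of a precomputed crop
```

From there you could accumulate the per-object annotations into whatever format your segmentation pipeline expects (e.g. COCO-style JSON).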
We plan to release the tool before CVPR 2018 (which starts June 18), but we are not yet sure whether it will be publicly available for large-scale annotation tasks, since that would require us to allocate a large number of GPUs on our side. It will most likely be a demo of a fully functional annotation tool that clients could use on their side. We could chat offline about that.
Hi @amlankar, is the tool available now? I am working on semantic segmentation for autonomous navigation, so such a tool would greatly help us annotate images.
Hi, shall we continue this conversation over email?
We are currently accepting requests for closed, short-term releases of the tool. Here is the link to the signup form.
We have released training/tool code! https://github.com/fidler-lab/polyrnn-pp-pytorch