runwayml/processing-library

Is the computation done remotely? And what about the speed?

Jerryzhangzhao opened this issue · 2 comments

Hi, thank you for your great work! I would like to know: is the computation (NN forward inference) done remotely? And what about the speed (especially for face landmark and human pose estimation)?

@Jerryzhangzhao Thank you for the kind words.

This is more of a general Runway question than a question specific to this library, but I'll do my best to answer :)

I would like to know: is the computation (NN forward inference) done remotely?

It depends:

  • if you select Remote (the default for many models) in the advanced model settings (lower right side of the workspace screen), then yes, inference is done remotely
  • if you select Local, you can choose CPU or GPU depending on your setup
  • as far as I know, Runway currently supports CUDA GPU acceleration on Linux but not on macOS (due to how GPU drivers are implemented and how Docker can access them on different operating systems)

And what about the speed (especially for face landmark and human pose estimation)?

Results will differ depending on your setup (the machine you're using, and to some extent your internet connection for Remote inference). My advice is to give it a go first (even without Processing, just using Runway); there's also a rough latency probe sketched after this list:

  • if you can use Runway with local GPU support, that should be the fastest option
  • without local GPU support, I wouldn't expect the speed to be amazing
  • for human pose, try the PoseNet model, but bear in mind you can also access some models through other interfaces (e.g. ml5.js, TFJS)
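
If you want a rough number before committing to anything, you can time a round trip against a model's HTTP endpoint from a plain Processing sketch. This is a minimal sketch, assuming the model is serving HTTP at localhost:8000 with a POST /query route; the port, route, and JSON payload schema below are placeholders, so copy the real values from the model's Network panel in Runway:

```java
// Minimal latency probe for a Runway model's HTTP endpoint.
// ASSUMPTIONS: host/port/route and the payload schema are placeholders;
// check the model's Network panel in Runway for the actual values.
import java.net.HttpURLConnection;
import java.net.URL;
import java.io.OutputStream;

void setup() {
  // Placeholder payload; the real field names depend on the model.
  String payload = "{\"image\": \"<base64-encoded frame>\"}";
  long start = System.nanoTime();
  try {
    URL url = new URL("http://localhost:8000/query");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    OutputStream os = conn.getOutputStream();
    os.write(payload.getBytes("UTF-8"));
    os.close();
    int code = conn.getResponseCode();
    long elapsedMs = (System.nanoTime() - start) / 1000000;
    println("HTTP " + code + " in " + elapsedMs + " ms");
  } catch (Exception e) {
    println("Request failed: " + e.getMessage());
  }
  exit();
}
```

Run it a few times each for Remote and Local inference and compare. The round trip includes network and encoding overhead, so treat it as an upper bound on model speed rather than a pure inference benchmark.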

My two pence: ML models for pose estimation can be resource-hungry.
If it's speed you're after, I wouldn't rule out using a completely different library with a depth sensor.
It really depends on the application and the resources available (time, budget), but you can still do plenty of fun projects by using the delay constraint creatively (e.g. creating a link between poses at different points in time); see the sketch below for one take on that.
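
To make that last point concrete, here's a rough, self-contained Processing sketch: it keeps a rolling buffer of pose keypoints and draws the pose from about a second ago linked to the live one. The pose source is stubbed with random jitter so the sketch runs on its own; in a real project you'd fill currentPose from your model callback instead. DELAY_FRAMES and the 17-keypoint, PoseNet-style layout are assumptions you'd adjust to your model's output:

```java
// Sketch of the "delayed pose" idea: buffer keypoint frames and draw
// the pose from DELAY_FRAMES ago next to the current one.
// ASSUMPTION: 17 keypoints (PoseNet-style); the pose source is a stub.
import java.util.ArrayDeque;

int DELAY_FRAMES = 30;                   // roughly 1 second at 30 fps
ArrayDeque<PVector[]> history = new ArrayDeque<PVector[]>();
PVector[] currentPose = new PVector[17];

void setup() {
  size(640, 480);
  // Stub: start from random keypoints so the sketch runs standalone.
  for (int i = 0; i < currentPose.length; i++) {
    currentPose[i] = new PVector(random(width), random(height));
  }
}

void draw() {
  background(0);

  // Stub motion: jitter the keypoints; replace this with the keypoints
  // your pose model actually returns each frame.
  PVector[] frame = new PVector[currentPose.length];
  for (int i = 0; i < currentPose.length; i++) {
    currentPose[i].add(random(-2, 2), random(-2, 2));
    frame[i] = currentPose[i].copy();
  }

  history.addLast(frame);
  if (history.size() > DELAY_FRAMES) {
    PVector[] past = history.removeFirst();
    // Link each live keypoint to where it was DELAY_FRAMES ago.
    stroke(80);
    for (int i = 0; i < frame.length; i++) {
      line(past[i].x, past[i].y, frame[i].x, frame[i].y);
    }
    drawPose(past, color(120));  // the delayed pose, dimmed
  }
  drawPose(frame, color(255));   // the live pose
}

void drawPose(PVector[] pose, int c) {
  noStroke();
  fill(c);
  for (PVector p : pose) {
    ellipse(p.x, p.y, 8, 8);
  }
}
```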

Thank you for your detailed answer. I am going to give it a try.