mosra/magnum

Parallel rendering with glb files

JayeFu opened this issue · 5 comments

Hi,

We have some *.glb files to render with Magnum. Is there functionality we can use for parallel rendering in Python (say, rendering 20 different viewpoints of the same scene in parallel)?

Many thanks in advance!

Best,
Jiawei

Hi, a wild guess before we go into details, to avoid reinventing the wheel -- is this in any way related to habitat-sim, or not at all? :) My answer will depend on that.

Hi, lol yep! We are using habitat-sim for simulation. :)

Now we are trying to put many (~20) RGB sensors on the robot. However, rendering in habitat-sim appears to be sequential, judging from this line. Our benchmarking confirms that rendering time indeed grows linearly with the number of sensors.
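
For context, the shape of that benchmark can be sketched without any habitat-sim code at all -- `render_sensor` below is just a stand-in with a fixed cost, not the real API, but it shows why a sequential per-sensor loop gives total time linear in the sensor count:

```python
import time

def render_sensor(cost_s: float) -> None:
    # Stand-in for one sensor's render; in habitat-sim the cost is real GPU work.
    time.sleep(cost_s)

def render_all(n_sensors: int, cost_s: float = 0.005) -> float:
    # Sequential per-sensor loop, mirroring the behavior we benchmarked:
    # every sensor pays the full per-render cost, so total time is linear.
    start = time.perf_counter()
    for _ in range(n_sensors):
        render_sensor(cost_s)
    return time.perf_counter() - start

if __name__ == '__main__':
    for n in (1, 5, 10, 20):
        print(f'{n:2d} sensors: {render_all(n):.3f} s')
```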

So we are wondering whether we can circumvent habitat-sim and just use Magnum to render in parallel.

In habitat-sim I'm working on a batch renderer that aims to solve this exact issue -- or at least to reduce some of the unnecessary repetition, by calculating scene transformations and submitting the draws just once. (Apart from multi-GPU submission, which is a complex topic on its own, processing the command buffer on the GPU to render everything is still inherently a linear operation.)
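
The repetition being removed can be illustrated without any Magnum or habitat-sim types. In this toy sketch (illustrative names, 2D offsets instead of 4x4 matrices -- not the gfx_batch API), the scene hierarchy is traversed once per frame and the result is reused for every viewpoint, so adding views only adds the cheap per-view work:

```python
# Illustrative sketch of amortizing scene traversal across views.
traversals = 0

def absolute_transforms(nodes):
    """Walk the scene hierarchy once, composing parent transforms.

    `nodes` maps name -> (parent name or None, local offset), parents first.
    """
    global traversals
    traversals += 1
    absolute = {}
    for name, (parent, offset) in nodes.items():
        px, py = absolute.get(parent, (0.0, 0.0))
        absolute[name] = (px + offset[0], py + offset[1])
    return absolute

def draw_views(nodes, views):
    """A naive renderer would recompute transforms per view; do it once."""
    world = absolute_transforms(nodes)
    frames = []
    for vx, vy in views:  # per-view work is only the cheap view offset
        frames.append({n: (x - vx, y - vy) for n, (x, y) in world.items()})
    return frames

scene = {
    'root':  (None,    (0.0, 0.0)),
    'robot': ('root',  (1.0, 2.0)),
    'cam':   ('robot', (0.5, 0.0)),
}
frames = draw_views(scene, [(float(i), 0.0) for i in range(20)])  # 20 viewpoints
```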

In its current state it's not exposed to Python yet, however -- there's just a C++ API. It currently supports only multiple scenes, not multiple sensors, and to make full use of the speed improvements it requires source files to be preprocessed into a so-called "composite file". Multi-sensor / multi-framebuffer support is on my TODO list (facebookresearch/habitat-sim#2170); I haven't been able to get back to that yet.

So we are wondering whether we can circumvent habitat-sim and just use Magnum to render in parallel.

To answer this -- yes, you theoretically could, the gfx_batch renderer mostly just puts together functionality that's available directly in Magnum.

Then it's a question of what you think would be the best path forward for you -- whether trying to bend / extend the current batch rendering code that's there, or whether trying to reimplement what you need with plain Magnum, or whether to wait until I get back to it and finish the multi-sensor / multi-framebuffer support.

Thanks a lot for the reply!

The gfx_batch renderer seems very attractive. We are looking into it now.

If we choose to implement this with plain Magnum, could you give us some pointers on the whole pipeline of using the Magnum Python API to render .glb files? I guess we could also try directly parallelizing that.

Many thanks in advance!

The Python API isn't complete enough for a batch-renderer-like workflow yet; you'll need to use the C++ APIs directly. The high-level rendering process and asset constraints are hopefully explained clearly enough in the docs I linked, and I tried my best to keep the gfx_batch::Renderer internals documented and easy to follow as well.
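
That said, the non-batch basics -- opening a *.glb and turning its meshes into GL objects -- are covered by the Python bindings. A minimal, untested sketch, assuming a built `AnySceneImporter`/`GltfImporter` plugin and a GL context that's already current (created e.g. through one of the `magnum.platform` application classes); treat the exact property names as approximate:

```python
from magnum import Deg, Matrix4, meshtools, shaders, trade

# Assumes a GL context is already current, e.g. from a
# magnum.platform.* (windowless) application subclass.
importer = trade.ImporterManager().load_and_instantiate('AnySceneImporter')
importer.open_file('scene.glb')

# Compile every mesh in the file into a GL mesh
meshes = [meshtools.compile(importer.mesh(i))
          for i in range(importer.mesh_count)]

shader = shaders.PhongGL()
shader.projection_matrix = Matrix4.perspective_projection(
    fov=Deg(45.0), aspect_ratio=1.0, near=0.1, far=100.0)

for mesh in meshes:
    # Per view, one would bind that view's framebuffer, update the
    # transformation from the scene hierarchy and draw; a fixed
    # placeholder transformation is used here.
    shader.transformation_matrix = Matrix4.translation((0.0, 0.0, -5.0))
    shader.draw(mesh)
```

This covers only single-threaded loading and drawing; the multi-view amortization discussed above is what the gfx_batch C++ code adds on top.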

According to my benchmarks, the scene setup & submission in the batch renderer itself wasn't really a bottleneck where parallelization would help much -- most of the time was spent waiting on the GPU, and transformation processing, for example, was only about 2% of the frame time. With pre-processed assets there isn't much other complexity that would need heavy parallelization to achieve substantial speedups. What was slow in comparison was the high-level (Python) code managing the scenes, physics, etc.

Have fun, and let's use the habitat-sim issue tracker for future questions re the batch renderer :)