mitsuba-renderer/drjit

Creating DrJit variable from raw GPU buffer

holmes969 opened this issue · 7 comments

I am wondering if I can create a Dr.Jit variable from a raw GPU buffer (referenced by a pointer)? I checked the documentation but didn't find anything related. If this is supported, how will the GPU memory be managed properly?

Hi @holmes969
Are you looking to do this in Python? I don't think it's currently possible to do so. However, the C++ API should allow you to do it.

I am going to do it in C++. Which specific function in the C++ API should I use?

Sorry, I take what I said back. I don't think this is possible out-of-the-box in C++ either. Dr.Jit only tracks its own variables and memory.

Originally, I thought this was supported, but in fact, in every example that I thought of, Dr.Jit allocates a raw GPU buffer and passes it on to some third-party library which fills it.

Thanks for the clarifications! I have another related question. Is it possible, via the C++ API, to get the raw buffer that Dr.Jit allocates for non-differentiable variables (e.g., FloatC) and modify it directly using CUDA kernels?

Yes, that is possible. Dr.Jit types are mostly JitArray types or compositions of them. Calling JitArray::data() will return the GPU buffer, which you can modify. Dr.Jit still tracks that pointer and hence retains ownership of that memory.

Note: Calling JitArray::data() will trigger a Dr.Jit kernel evaluation.
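For illustration, here is a minimal sketch of that pattern. It assumes Dr.Jit is built with the CUDA backend, that the file is compiled with nvcc, and that scale_kernel is a user-written CUDA kernel (not part of Dr.Jit):

```cpp
// Sketch only — assumes the CUDA backend of Dr.Jit and nvcc compilation.
#include <drjit/jit.h>

using Float = drjit::CUDAArray<float>;

// Hypothetical user kernel that scales each element in place.
__global__ void scale_kernel(float *buf, size_t n, float factor) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        buf[i] *= factor;
}

void scale_in_place(Float &x, float factor) {
    // data() triggers evaluation of any pending Dr.Jit kernels and then
    // returns the raw device pointer. Dr.Jit retains ownership of it.
    float *ptr = x.data();
    size_t n   = x.size();

    scale_kernel<<<(n + 255) / 256, 256>>>(ptr, n, factor);

    // Synchronize before Dr.Jit operates on the buffer again.
    cudaDeviceSynchronize();
}
```

Since Dr.Jit owns the memory, the external kernel should only read/write in place and must not free or reallocate the buffer.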

Can I close this?

I don't know how I missed this originally, but in case anyone ever comes across this thread again: it is actually possible to create Dr.Jit types from pre-allocated memory regions:
https://github.com/mitsuba-renderer/drjit/blob/master/include/drjit/array_router.h#L919-L924

The free argument indicates whether or not Dr.Jit should take ownership of the memory.
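A hedged sketch of how the linked function might be used (it assumes the CUDA backend and a device pointer obtained elsewhere, e.g. from cudaMalloc or a third-party library; check the linked header for the exact signature in your Dr.Jit version):

```cpp
// Sketch only — assumes the CUDA backend and an externally allocated
// device buffer of n floats.
#include <drjit/jit.h>

using Float = drjit::CUDAArray<float>;

Float wrap_existing_buffer(float *device_ptr, size_t n) {
    // free = false: Dr.Jit borrows the memory; the caller remains
    // responsible for releasing it and must keep it alive while the
    // returned array is in use.
    // free = true would instead transfer ownership to Dr.Jit, which
    // frees the buffer when the variable's refcount drops to zero.
    return drjit::map<Float>(device_ptr, n, /* free = */ false);
}
```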